Apr 21 12:01:29.134444 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 12:01:29.134478 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 12:01:29.134494 kernel: BIOS-provided physical RAM map:
Apr 21 12:01:29.134506 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 12:01:29.134517 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 21 12:01:29.134528 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Apr 21 12:01:29.134542 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Apr 21 12:01:29.134554 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Apr 21 12:01:29.134569 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Apr 21 12:01:29.134580 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Apr 21 12:01:29.134590 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 21 12:01:29.134600 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 21 12:01:29.134610 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 21 12:01:29.134620 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 21 12:01:29.134636 kernel: printk: bootconsole [earlyser0] enabled
Apr 21 12:01:29.134653 kernel: NX (Execute Disable) protection: active
Apr 21 12:01:29.134665 kernel: APIC: Static calls initialized
Apr 21 12:01:29.134679 kernel: efi: EFI v2.7 by Microsoft
Apr 21 12:01:29.134692 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f420418
Apr 21 12:01:29.134706 kernel: SMBIOS 3.1.0 present.
Apr 21 12:01:29.134719 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 21 12:01:29.134732 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 21 12:01:29.134743 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 21 12:01:29.134753 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0
Apr 21 12:01:29.134764 kernel: Hyper-V: Nested features: 0x1e0101
Apr 21 12:01:29.134779 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 21 12:01:29.134792 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 21 12:01:29.134804 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 21 12:01:29.134816 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 21 12:01:29.134830 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 21 12:01:29.134842 kernel: tsc: Detected 2593.906 MHz processor
Apr 21 12:01:29.134855 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 12:01:29.134868 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 12:01:29.134880 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 21 12:01:29.134896 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 12:01:29.134909 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 12:01:29.134921 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 21 12:01:29.134932 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 21 12:01:29.134945 kernel: Using GB pages for direct mapping
Apr 21 12:01:29.134959 kernel: Secure boot disabled
Apr 21 12:01:29.134978 kernel: ACPI: Early table checksum verification disabled
Apr 21 12:01:29.134996 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 21 12:01:29.135009 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135022 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135037 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 21 12:01:29.135051 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 21 12:01:29.135066 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135080 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135098 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135112 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135126 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135140 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:01:29.135155 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 21 12:01:29.135169 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Apr 21 12:01:29.135183 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 21 12:01:29.135198 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 21 12:01:29.135211 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 21 12:01:29.135229 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 21 12:01:29.135244 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 21 12:01:29.135258 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Apr 21 12:01:29.135272 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 21 12:01:29.136328 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 12:01:29.136345 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 12:01:29.136360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 21 12:01:29.136374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 21 12:01:29.136387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 21 12:01:29.136404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 21 12:01:29.136419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 21 12:01:29.136432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 21 12:01:29.136444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 21 12:01:29.136457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 21 12:01:29.136470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 21 12:01:29.136484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 21 12:01:29.136497 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 21 12:01:29.136514 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 21 12:01:29.136527 kernel: Zone ranges:
Apr 21 12:01:29.136541 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 12:01:29.136554 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 12:01:29.136567 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 21 12:01:29.136579 kernel: Movable zone start for each node
Apr 21 12:01:29.136592 kernel: Early memory node ranges
Apr 21 12:01:29.136606 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 12:01:29.136620 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Apr 21 12:01:29.136644 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Apr 21 12:01:29.136660 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 21 12:01:29.136674 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 21 12:01:29.136688 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 21 12:01:29.136703 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 12:01:29.136717 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 12:01:29.136731 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 21 12:01:29.136746 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Apr 21 12:01:29.136760 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 21 12:01:29.136779 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 21 12:01:29.136794 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 21 12:01:29.136809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 12:01:29.136825 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 12:01:29.136839 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 21 12:01:29.136854 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 12:01:29.136869 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 21 12:01:29.136883 kernel: Booting paravirtualized kernel on Hyper-V
Apr 21 12:01:29.136898 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 12:01:29.136916 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 12:01:29.136931 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 12:01:29.136944 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 12:01:29.136957 kernel: pcpu-alloc: [0] 0 1
Apr 21 12:01:29.136969 kernel: Hyper-V: PV spinlocks enabled
Apr 21 12:01:29.136982 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 12:01:29.136996 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 12:01:29.137011 kernel: random: crng init done
Apr 21 12:01:29.137026 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 21 12:01:29.137039 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 12:01:29.137052 kernel: Fallback order for Node 0: 0
Apr 21 12:01:29.137064 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Apr 21 12:01:29.137077 kernel: Policy zone: Normal
Apr 21 12:01:29.137089 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 12:01:29.137102 kernel: software IO TLB: area num 2.
Apr 21 12:01:29.137115 kernel: Memory: 8061212K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 321756K reserved, 0K cma-reserved)
Apr 21 12:01:29.137128 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 12:01:29.137155 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 12:01:29.137169 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 12:01:29.137183 kernel: Dynamic Preempt: voluntary
Apr 21 12:01:29.137200 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 12:01:29.137215 kernel: rcu: RCU event tracing is enabled.
Apr 21 12:01:29.137229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 12:01:29.137244 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 12:01:29.137259 kernel: Rude variant of Tasks RCU enabled.
Apr 21 12:01:29.137274 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 12:01:29.137312 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 12:01:29.137325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 12:01:29.137337 kernel: Using NULL legacy PIC
Apr 21 12:01:29.137350 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 21 12:01:29.137363 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 12:01:29.137376 kernel: Console: colour dummy device 80x25
Apr 21 12:01:29.137389 kernel: printk: console [tty1] enabled
Apr 21 12:01:29.137401 kernel: printk: console [ttyS0] enabled
Apr 21 12:01:29.137417 kernel: printk: bootconsole [earlyser0] disabled
Apr 21 12:01:29.137432 kernel: ACPI: Core revision 20230628
Apr 21 12:01:29.137448 kernel: Failed to register legacy timer interrupt
Apr 21 12:01:29.137462 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 12:01:29.137477 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 21 12:01:29.137490 kernel: Hyper-V: Using IPI hypercalls
Apr 21 12:01:29.137503 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 21 12:01:29.137516 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 21 12:01:29.137530 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 21 12:01:29.137546 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 21 12:01:29.137559 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 21 12:01:29.137573 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 21 12:01:29.137586 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 21 12:01:29.137600 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 12:01:29.137614 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 12:01:29.137628 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 12:01:29.137641 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 12:01:29.137654 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 12:01:29.137667 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 12:01:29.137684 kernel: RETBleed: Vulnerable
Apr 21 12:01:29.137698 kernel: Speculative Store Bypass: Vulnerable
Apr 21 12:01:29.137712 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 12:01:29.137725 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 12:01:29.137738 kernel: active return thunk: its_return_thunk
Apr 21 12:01:29.137752 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 12:01:29.137765 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 12:01:29.137779 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 12:01:29.137793 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 12:01:29.137806 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 12:01:29.137824 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 12:01:29.137838 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 12:01:29.137851 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 12:01:29.137865 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 12:01:29.137878 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 12:01:29.137893 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 12:01:29.137907 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 12:01:29.137921 kernel: Freeing SMP alternatives memory: 32K
Apr 21 12:01:29.137934 kernel: pid_max: default: 32768 minimum: 301
Apr 21 12:01:29.137948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 12:01:29.137962 kernel: landlock: Up and running.
Apr 21 12:01:29.137976 kernel: SELinux: Initializing.
Apr 21 12:01:29.137993 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 12:01:29.138007 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 12:01:29.138021 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 12:01:29.138036 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:01:29.138051 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:01:29.138066 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:01:29.138081 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 12:01:29.138095 kernel: signal: max sigframe size: 3632
Apr 21 12:01:29.138109 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 12:01:29.138127 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 12:01:29.138141 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 12:01:29.138156 kernel: smp: Bringing up secondary CPUs ...
Apr 21 12:01:29.138170 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 12:01:29.138184 kernel: .... node #0, CPUs: #1
Apr 21 12:01:29.138200 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 21 12:01:29.138216 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 12:01:29.138230 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 12:01:29.138245 kernel: smpboot: Max logical packages: 1
Apr 21 12:01:29.138263 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 21 12:01:29.139307 kernel: devtmpfs: initialized
Apr 21 12:01:29.139335 kernel: x86/mm: Memory block size: 128MB
Apr 21 12:01:29.139354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 21 12:01:29.139373 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 12:01:29.139390 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 12:01:29.139405 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 12:01:29.139420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 12:01:29.139435 kernel: audit: initializing netlink subsys (disabled)
Apr 21 12:01:29.139453 kernel: audit: type=2000 audit(1776772888.030:1): state=initialized audit_enabled=0 res=1
Apr 21 12:01:29.139468 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 12:01:29.139483 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 12:01:29.139497 kernel: cpuidle: using governor menu
Apr 21 12:01:29.139511 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 12:01:29.139525 kernel: dca service started, version 1.12.1
Apr 21 12:01:29.139540 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Apr 21 12:01:29.139553 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Apr 21 12:01:29.139568 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 12:01:29.139587 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 12:01:29.139602 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 12:01:29.139617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 12:01:29.139633 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 12:01:29.139648 kernel: ACPI: Added _OSI(Module Device)
Apr 21 12:01:29.139663 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 12:01:29.139678 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 12:01:29.139693 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 12:01:29.139711 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 12:01:29.139726 kernel: ACPI: Interpreter enabled
Apr 21 12:01:29.139741 kernel: ACPI: PM: (supports S0 S5)
Apr 21 12:01:29.139756 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 12:01:29.139772 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 12:01:29.139787 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 21 12:01:29.139802 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 21 12:01:29.139817 kernel: iommu: Default domain type: Translated
Apr 21 12:01:29.139832 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 12:01:29.139847 kernel: efivars: Registered efivars operations
Apr 21 12:01:29.139865 kernel: PCI: Using ACPI for IRQ routing
Apr 21 12:01:29.139880 kernel: PCI: System does not support PCI
Apr 21 12:01:29.139895 kernel: vgaarb: loaded
Apr 21 12:01:29.139910 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 21 12:01:29.139925 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 12:01:29.139939 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 12:01:29.139954 kernel: pnp: PnP ACPI init
Apr 21 12:01:29.139969 kernel: pnp: PnP ACPI: found 3 devices
Apr 21 12:01:29.139985 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 12:01:29.140003 kernel: NET: Registered PF_INET protocol family
Apr 21 12:01:29.140018 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 21 12:01:29.140033 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 21 12:01:29.140049 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 12:01:29.140064 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 12:01:29.140079 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 21 12:01:29.140095 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 21 12:01:29.140110 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 21 12:01:29.140125 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 21 12:01:29.140143 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 12:01:29.140158 kernel: NET: Registered PF_XDP protocol family
Apr 21 12:01:29.140173 kernel: PCI: CLS 0 bytes, default 64
Apr 21 12:01:29.140188 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 21 12:01:29.140205 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 21 12:01:29.140220 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 12:01:29.140236 kernel: Initialise system trusted keyrings
Apr 21 12:01:29.140251 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 21 12:01:29.140269 kernel: Key type asymmetric registered
Apr 21 12:01:29.141339 kernel: Asymmetric key parser 'x509' registered
Apr 21 12:01:29.141358 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 12:01:29.141371 kernel: io scheduler mq-deadline registered
Apr 21 12:01:29.141385 kernel: io scheduler kyber registered
Apr 21 12:01:29.141399 kernel: io scheduler bfq registered
Apr 21 12:01:29.141414 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 12:01:29.141428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 12:01:29.141439 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 12:01:29.141451 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 21 12:01:29.141469 kernel: i8042: PNP: No PS/2 controller found.
Apr 21 12:01:29.141644 kernel: rtc_cmos 00:02: registered as rtc0
Apr 21 12:01:29.141757 kernel: rtc_cmos 00:02: setting system clock to 2026-04-21T12:01:28 UTC (1776772888)
Apr 21 12:01:29.141854 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 21 12:01:29.141869 kernel: intel_pstate: CPU model not supported
Apr 21 12:01:29.141878 kernel: efifb: probing for efifb
Apr 21 12:01:29.141891 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 21 12:01:29.141907 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 21 12:01:29.141916 kernel: efifb: scrolling: redraw
Apr 21 12:01:29.141926 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 12:01:29.141937 kernel: Console: switching to colour frame buffer device 128x48
Apr 21 12:01:29.141945 kernel: fb0: EFI VGA frame buffer device
Apr 21 12:01:29.141963 kernel: pstore: Using crash dump compression: deflate
Apr 21 12:01:29.141971 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 12:01:29.141982 kernel: NET: Registered PF_INET6 protocol family
Apr 21 12:01:29.141992 kernel: Segment Routing with IPv6
Apr 21 12:01:29.142005 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 12:01:29.142015 kernel: NET: Registered PF_PACKET protocol family
Apr 21 12:01:29.142025 kernel: Key type dns_resolver registered
Apr 21 12:01:29.142037 kernel: IPI shorthand broadcast: enabled
Apr 21 12:01:29.142045 kernel: sched_clock: Marking stable (937003800, 51069900)->(1211802200, -223728500)
Apr 21 12:01:29.142053 kernel: registered taskstats version 1
Apr 21 12:01:29.142066 kernel: Loading compiled-in X.509 certificates
Apr 21 12:01:29.142075 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 12:01:29.142087 kernel: Key type .fscrypt registered
Apr 21 12:01:29.142097 kernel: Key type fscrypt-provisioning registered
Apr 21 12:01:29.142107 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 12:01:29.142119 kernel: ima: Allocated hash algorithm: sha1
Apr 21 12:01:29.142127 kernel: ima: No architecture policies found
Apr 21 12:01:29.142135 kernel: clk: Disabling unused clocks
Apr 21 12:01:29.142143 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 12:01:29.142151 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 12:01:29.142159 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 12:01:29.142168 kernel: Run /init as init process
Apr 21 12:01:29.142178 kernel: with arguments:
Apr 21 12:01:29.142186 kernel: /init
Apr 21 12:01:29.142194 kernel: with environment:
Apr 21 12:01:29.142203 kernel: HOME=/
Apr 21 12:01:29.142210 kernel: TERM=linux
Apr 21 12:01:29.142221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 12:01:29.142236 systemd[1]: Detected virtualization microsoft.
Apr 21 12:01:29.142245 systemd[1]: Detected architecture x86-64.
Apr 21 12:01:29.142256 systemd[1]: Running in initrd.
Apr 21 12:01:29.142268 systemd[1]: No hostname configured, using default hostname.
Apr 21 12:01:29.142290 systemd[1]: Hostname set to <localhost>.
Apr 21 12:01:29.142300 systemd[1]: Initializing machine ID from random generator.
Apr 21 12:01:29.142312 systemd[1]: Queued start job for default target initrd.target.
Apr 21 12:01:29.142322 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 12:01:29.142331 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 12:01:29.142340 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 12:01:29.142355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 12:01:29.142365 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 12:01:29.142374 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 12:01:29.142389 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 12:01:29.142398 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 12:01:29.142410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 12:01:29.142420 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 12:01:29.142431 systemd[1]: Reached target paths.target - Path Units.
Apr 21 12:01:29.142442 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 12:01:29.142453 systemd[1]: Reached target swap.target - Swaps.
Apr 21 12:01:29.142461 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 12:01:29.142473 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 12:01:29.142483 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 12:01:29.142496 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 12:01:29.142506 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 12:01:29.142515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 12:01:29.142531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 12:01:29.142541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 12:01:29.142553 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 12:01:29.142561 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 12:01:29.142574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 12:01:29.142583 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 12:01:29.142594 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 12:01:29.142605 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 12:01:29.142616 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 12:01:29.142650 systemd-journald[177]: Collecting audit messages is disabled.
Apr 21 12:01:29.142675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:29.142685 systemd-journald[177]: Journal started
Apr 21 12:01:29.142712 systemd-journald[177]: Runtime Journal (/run/log/journal/2ddc536794774acab5f6e5cda497381a) is 8.0M, max 158.7M, 150.7M free.
Apr 21 12:01:29.164367 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 12:01:29.164903 systemd-modules-load[178]: Inserted module 'overlay'
Apr 21 12:01:29.167800 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 12:01:29.174429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 12:01:29.182326 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 12:01:29.187303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:29.201543 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 12:01:29.213443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 12:01:29.217130 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 12:01:29.228469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 12:01:29.243650 kernel: Bridge firewalling registered
Apr 21 12:01:29.242576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 12:01:29.250128 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 21 12:01:29.253717 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 12:01:29.265523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 12:01:29.276464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 12:01:29.280226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 12:01:29.293573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 12:01:29.301025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 12:01:29.308816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 12:01:29.320510 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 12:01:29.328478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 12:01:29.339991 dracut-cmdline[214]: dracut-dracut-053
Apr 21 12:01:29.345320 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 12:01:29.395563 systemd-resolved[219]: Positive Trust Anchors:
Apr 21 12:01:29.395583 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 12:01:29.395635 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 12:01:29.425214 systemd-resolved[219]: Defaulting to hostname 'linux'.
Apr 21 12:01:29.430038 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 12:01:29.439310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 12:01:29.448014 kernel: SCSI subsystem initialized
Apr 21 12:01:29.458298 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 12:01:29.470304 kernel: iscsi: registered transport (tcp)
Apr 21 12:01:29.491934 kernel: iscsi: registered transport (qla4xxx)
Apr 21 12:01:29.492008 kernel: QLogic iSCSI HBA Driver
Apr 21 12:01:29.530900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 12:01:29.540538 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 12:01:29.568299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 12:01:29.568370 kernel: device-mapper: uevent: version 1.0.3
Apr 21 12:01:29.570295 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 12:01:29.614306 kernel: raid6: avx512x4 gen() 18229 MB/s
Apr 21 12:01:29.634295 kernel: raid6: avx512x2 gen() 18238 MB/s
Apr 21 12:01:29.653288 kernel: raid6: avx512x1 gen() 18131 MB/s
Apr 21 12:01:29.672288 kernel: raid6: avx2x4 gen() 18231 MB/s
Apr 21 12:01:29.692295 kernel: raid6: avx2x2 gen() 18166 MB/s
Apr 21 12:01:29.712934 kernel: raid6: avx2x1 gen() 13845 MB/s
Apr 21 12:01:29.712964 kernel: raid6: using algorithm avx512x2 gen() 18238 MB/s
Apr 21 12:01:29.734738 kernel: raid6: .... xor() 30514 MB/s, rmw enabled
Apr 21 12:01:29.734765 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 12:01:29.758309 kernel: xor: automatically using best checksumming function avx
Apr 21 12:01:29.907313 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 12:01:29.917489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 12:01:29.929487 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 12:01:29.944821 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Apr 21 12:01:29.949670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 12:01:29.970468 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 12:01:29.986252 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Apr 21 12:01:30.017716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 12:01:30.027497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 12:01:30.071172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 12:01:30.088484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 12:01:30.120180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 12:01:30.130097 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 12:01:30.138866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 12:01:30.146660 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 12:01:30.159538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 12:01:30.187958 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 12:01:30.193293 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 12:01:30.195346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 12:01:30.199772 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 12:01:30.203635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:01:30.203819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:30.207460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:30.238270 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 12:01:30.245165 kernel: AES CTR mode by8 optimization enabled
Apr 21 12:01:30.237909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:30.244544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 12:01:30.260252 kernel: hv_vmbus: Vmbus version:5.2
Apr 21 12:01:30.261003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:01:30.265430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:30.283528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:30.306306 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 21 12:01:30.324566 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 21 12:01:30.324651 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 21 12:01:30.329295 kernel: hv_vmbus: registering driver hv_netvsc
Apr 21 12:01:30.329344 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 21 12:01:30.337599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:30.348454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 12:01:30.363599 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 21 12:01:30.369318 kernel: PTP clock support registered
Apr 21 12:01:30.395606 kernel: hv_utils: Registering HyperV Utility Driver
Apr 21 12:01:30.395681 kernel: hv_vmbus: registering driver hv_utils
Apr 21 12:01:30.395694 kernel: hv_vmbus: registering driver hid_hyperv
Apr 21 12:01:30.398309 kernel: hv_vmbus: registering driver hv_storvsc
Apr 21 12:01:30.404310 kernel: hv_utils: Heartbeat IC version 3.0
Apr 21 12:01:30.401514 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 12:01:30.417192 kernel: hv_utils: Shutdown IC version 3.2
Apr 21 12:01:30.417222 kernel: hv_utils: TimeSync IC version 4.0
Apr 21 12:01:31.457426 kernel: scsi host0: storvsc_host_t
Apr 21 12:01:31.457713 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 21 12:01:31.457731 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 21 12:01:31.457793 systemd-resolved[219]: Clock change detected. Flushing caches.
Apr 21 12:01:31.475197 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 21 12:01:31.475465 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 21 12:01:31.475708 kernel: scsi host1: storvsc_host_t
Apr 21 12:01:31.495255 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 21 12:01:31.496218 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 12:01:31.496257 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 21 12:01:31.509523 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 21 12:01:31.509884 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 21 12:01:31.510074 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 21 12:01:31.515891 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 21 12:01:31.516232 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 21 12:01:31.527526 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#299 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 21 12:01:31.527773 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 12:01:31.530511 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: VF slot 1 added
Apr 21 12:01:31.535892 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 21 12:01:31.557529 kernel: hv_vmbus: registering driver hv_pci
Apr 21 12:01:31.562513 kernel: hv_pci d62de3a2-5aeb-4dad-b259-d747d26d640f: PCI VMBus probing: Using version 0x10004
Apr 21 12:01:31.576683 kernel: hv_pci d62de3a2-5aeb-4dad-b259-d747d26d640f: PCI host bridge to bus 5aeb:00
Apr 21 12:01:31.576953 kernel: pci_bus 5aeb:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 21 12:01:31.577105 kernel: pci_bus 5aeb:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 21 12:01:31.588368 kernel: pci 5aeb:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 21 12:01:31.588464 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#273 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 21 12:01:31.588672 kernel: pci 5aeb:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 21 12:01:31.594075 kernel: pci 5aeb:00:02.0: enabling Extended Tags
Apr 21 12:01:31.612616 kernel: pci 5aeb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5aeb:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 21 12:01:31.619643 kernel: pci_bus 5aeb:00: busn_res: [bus 00-ff] end is updated to 00
Apr 21 12:01:31.619984 kernel: pci 5aeb:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 21 12:01:31.787410 kernel: mlx5_core 5aeb:00:02.0: enabling device (0000 -> 0002)
Apr 21 12:01:31.792526 kernel: mlx5_core 5aeb:00:02.0: firmware version: 14.30.5026
Apr 21 12:01:32.005916 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: VF registering: eth1
Apr 21 12:01:32.006300 kernel: mlx5_core 5aeb:00:02.0 eth1: joined to eth0
Apr 21 12:01:32.010523 kernel: mlx5_core 5aeb:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 21 12:01:32.021542 kernel: mlx5_core 5aeb:00:02.0 enP23275s1: renamed from eth1
Apr 21 12:01:32.073971 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 21 12:01:32.094625 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (467)
Apr 21 12:01:32.110140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 21 12:01:32.138320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 21 12:01:32.159522 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (441)
Apr 21 12:01:32.174592 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 21 12:01:32.178749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 21 12:01:32.190793 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 12:01:33.222515 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 12:01:33.222985 disk-uuid[605]: The operation has completed successfully.
Apr 21 12:01:33.308816 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 12:01:33.308940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 12:01:33.337667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 12:01:33.345356 sh[721]: Success
Apr 21 12:01:33.377562 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 21 12:01:33.682009 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 12:01:33.698623 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 12:01:33.706839 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 12:01:33.727988 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 12:01:33.728065 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:01:33.732233 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 12:01:33.735601 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 12:01:33.738347 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 12:01:34.169795 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 12:01:34.176234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 12:01:34.189686 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 12:01:34.196505 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 12:01:34.223184 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:34.223262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:01:34.227217 kernel: BTRFS info (device sda6): using free space tree
Apr 21 12:01:34.277523 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 12:01:34.289890 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 12:01:34.299763 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:34.301453 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 12:01:34.307950 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 12:01:34.319666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 12:01:34.327713 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 12:01:34.351691 systemd-networkd[905]: lo: Link UP
Apr 21 12:01:34.351703 systemd-networkd[905]: lo: Gained carrier
Apr 21 12:01:34.354035 systemd-networkd[905]: Enumeration completed
Apr 21 12:01:34.354332 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 12:01:34.360177 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:34.360183 systemd-networkd[905]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 12:01:34.360287 systemd[1]: Reached target network.target - Network.
Apr 21 12:01:34.421524 kernel: mlx5_core 5aeb:00:02.0 enP23275s1: Link up
Apr 21 12:01:34.452523 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: Data path switched to VF: enP23275s1
Apr 21 12:01:34.453033 systemd-networkd[905]: enP23275s1: Link UP
Apr 21 12:01:34.453182 systemd-networkd[905]: eth0: Link UP
Apr 21 12:01:34.453395 systemd-networkd[905]: eth0: Gained carrier
Apr 21 12:01:34.453410 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:34.468799 systemd-networkd[905]: enP23275s1: Gained carrier
Apr 21 12:01:34.484543 systemd-networkd[905]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 12:01:35.407415 ignition[904]: Ignition 2.19.0
Apr 21 12:01:35.407430 ignition[904]: Stage: fetch-offline
Apr 21 12:01:35.407489 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:35.407519 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:35.407658 ignition[904]: parsed url from cmdline: ""
Apr 21 12:01:35.407663 ignition[904]: no config URL provided
Apr 21 12:01:35.407671 ignition[904]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 12:01:35.407683 ignition[904]: no config at "/usr/lib/ignition/user.ign"
Apr 21 12:01:35.407690 ignition[904]: failed to fetch config: resource requires networking
Apr 21 12:01:35.407900 ignition[904]: Ignition finished successfully
Apr 21 12:01:35.432302 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 12:01:35.442705 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 12:01:35.459951 ignition[914]: Ignition 2.19.0
Apr 21 12:01:35.459965 ignition[914]: Stage: fetch
Apr 21 12:01:35.460193 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:35.460207 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:35.460350 ignition[914]: parsed url from cmdline: ""
Apr 21 12:01:35.460354 ignition[914]: no config URL provided
Apr 21 12:01:35.460359 ignition[914]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 12:01:35.460368 ignition[914]: no config at "/usr/lib/ignition/user.ign"
Apr 21 12:01:35.460387 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 21 12:01:35.559771 ignition[914]: GET result: OK
Apr 21 12:01:35.559914 ignition[914]: config has been read from IMDS userdata
Apr 21 12:01:35.559949 ignition[914]: parsing config with SHA512: 41f14856ee7c6de4122623b0c495909a13efe270b09abfb242f0f9ad997c2bfab14ce3acdea8a9245d976f2d54cd8ad14410afce1966c374294f88018a3b8710
Apr 21 12:01:35.566397 unknown[914]: fetched base config from "system"
Apr 21 12:01:35.566409 unknown[914]: fetched base config from "system"
Apr 21 12:01:35.566965 ignition[914]: fetch: fetch complete
Apr 21 12:01:35.566418 unknown[914]: fetched user config from "azure"
Apr 21 12:01:35.566971 ignition[914]: fetch: fetch passed
Apr 21 12:01:35.567017 ignition[914]: Ignition finished successfully
Apr 21 12:01:35.580988 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 12:01:35.592706 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 12:01:35.609862 ignition[920]: Ignition 2.19.0
Apr 21 12:01:35.609875 ignition[920]: Stage: kargs
Apr 21 12:01:35.613154 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 12:01:35.610118 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:35.610132 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:35.611088 ignition[920]: kargs: kargs passed
Apr 21 12:01:35.611143 ignition[920]: Ignition finished successfully
Apr 21 12:01:35.628756 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 12:01:35.647614 ignition[926]: Ignition 2.19.0
Apr 21 12:01:35.647628 ignition[926]: Stage: disks
Apr 21 12:01:35.651778 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 12:01:35.647865 ignition[926]: no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:35.647877 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:35.650408 ignition[926]: disks: disks passed
Apr 21 12:01:35.650457 ignition[926]: Ignition finished successfully
Apr 21 12:01:35.666594 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 12:01:35.677370 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 12:01:35.681158 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 12:01:35.687803 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 12:01:35.691158 systemd[1]: Reached target basic.target - Basic System.
Apr 21 12:01:35.703766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 12:01:35.789233 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 21 12:01:35.788882 systemd-networkd[905]: eth0: Gained IPv6LL
Apr 21 12:01:35.796679 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 12:01:35.807714 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 12:01:35.906532 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 12:01:35.906749 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 12:01:35.907461 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 12:01:35.943597 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 12:01:35.963522 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Apr 21 12:01:35.974525 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:35.974606 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:01:35.974634 kernel: BTRFS info (device sda6): using free space tree
Apr 21 12:01:35.980691 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 12:01:35.987367 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 12:01:35.988468 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 12:01:35.995738 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 12:01:35.995780 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 12:01:36.009775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 12:01:36.012507 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 12:01:36.025680 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 12:01:36.901427 coreos-metadata[962]: Apr 21 12:01:36.901 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 21 12:01:36.907578 coreos-metadata[962]: Apr 21 12:01:36.907 INFO Fetch successful
Apr 21 12:01:36.910796 coreos-metadata[962]: Apr 21 12:01:36.910 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 21 12:01:36.922520 coreos-metadata[962]: Apr 21 12:01:36.922 INFO Fetch successful
Apr 21 12:01:36.939213 coreos-metadata[962]: Apr 21 12:01:36.939 INFO wrote hostname ci-4081.3.7-a-fffe528a55 to /sysroot/etc/hostname
Apr 21 12:01:36.944106 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 12:01:36.944793 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 12:01:36.971370 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory
Apr 21 12:01:36.979281 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 12:01:36.985721 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 12:01:37.773843 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 12:01:37.789687 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 12:01:37.796696 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 12:01:37.809278 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:37.810924 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 12:01:37.843543 ignition[1067]: INFO : Ignition 2.19.0
Apr 21 12:01:37.843543 ignition[1067]: INFO : Stage: mount
Apr 21 12:01:37.843543 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:37.843543 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:37.862481 ignition[1067]: INFO : mount: mount passed
Apr 21 12:01:37.862481 ignition[1067]: INFO : Ignition finished successfully
Apr 21 12:01:37.848873 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 12:01:37.855804 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 12:01:37.874697 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 12:01:37.883030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 12:01:37.906519 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078)
Apr 21 12:01:37.915048 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:37.915130 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:01:37.917898 kernel: BTRFS info (device sda6): using free space tree
Apr 21 12:01:37.926525 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 12:01:37.927433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 12:01:37.957387 ignition[1095]: INFO : Ignition 2.19.0
Apr 21 12:01:37.960133 ignition[1095]: INFO : Stage: files
Apr 21 12:01:37.962407 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:37.965600 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:37.965600 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 12:01:37.973454 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 12:01:37.977826 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 12:01:38.147236 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 12:01:38.151568 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 12:01:38.156038 unknown[1095]: wrote ssh authorized keys file for user: core
Apr 21 12:01:38.159341 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 12:01:38.190943 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 12:01:38.196939 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 12:01:38.196939 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 12:01:38.196939 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 12:01:38.248269 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 12:01:38.290633 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 12:01:38.297007 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 12:01:38.297007 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 12:01:38.297007 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 12:01:38.673812 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 12:01:39.029284 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:39.029284 ignition[1095]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 21 12:01:39.052262 ignition[1095]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 12:01:39.060155 ignition[1095]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 12:01:39.060155 ignition[1095]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 21 12:01:39.060155 ignition[1095]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: files passed
Apr 21 12:01:39.079533 ignition[1095]: INFO : Ignition finished successfully
Apr 21 12:01:39.073465 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 12:01:39.128739 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 12:01:39.138866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 12:01:39.153879 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 12:01:39.154003 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 12:01:39.169485 initrd-setup-root-after-ignition[1126]: grep:
Apr 21 12:01:39.169485 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:01:39.169485 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:01:39.180545 initrd-setup-root-after-ignition[1126]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:01:39.185571 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 12:01:39.185920 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 12:01:39.196737 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 12:01:39.224621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 12:01:39.226540 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 12:01:39.234565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 12:01:39.240372 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 12:01:39.243448 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 12:01:39.251743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 12:01:39.270177 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 12:01:39.280698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 12:01:39.293980 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 12:01:39.300618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 12:01:39.307262 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 12:01:39.312299 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 12:01:39.312446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 12:01:39.316563 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 12:01:39.318665 systemd[1]: Stopped target basic.target - Basic System. Apr 21 12:01:39.319092 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 12:01:39.319572 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 12:01:39.320047 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 12:01:39.320965 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 12:01:39.339899 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 12:01:39.342760 systemd[1]: Stopped target sysinit.target - System Initialization. 
Apr 21 12:01:39.343212 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 12:01:39.343700 systemd[1]: Stopped target swap.target - Swaps. Apr 21 12:01:39.344145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 12:01:39.344274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 12:01:39.349244 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 12:01:39.349733 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 12:01:39.350148 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 12:01:39.385286 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 12:01:39.389254 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 12:01:39.389405 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 12:01:39.400565 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 12:01:39.400799 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 12:01:39.407274 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 12:01:39.407398 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 12:01:39.415828 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 21 12:01:39.415946 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 21 12:01:39.450011 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 12:01:39.450171 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 12:01:39.452638 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 12:01:39.466218 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 12:01:39.469548 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Apr 21 12:01:39.475431 ignition[1147]: INFO : Ignition 2.19.0 Apr 21 12:01:39.475431 ignition[1147]: INFO : Stage: umount Apr 21 12:01:39.475431 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 12:01:39.475431 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:01:39.475431 ignition[1147]: INFO : umount: umount passed Apr 21 12:01:39.475431 ignition[1147]: INFO : Ignition finished successfully Apr 21 12:01:39.469784 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 12:01:39.481179 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 12:01:39.481321 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 12:01:39.490062 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 12:01:39.490168 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 12:01:39.497772 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 21 12:01:39.497880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 12:01:39.507756 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 12:01:39.507883 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 12:01:39.513358 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 12:01:39.513417 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 12:01:39.522169 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 21 12:01:39.522229 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 21 12:01:39.552729 systemd[1]: Stopped target network.target - Network. Apr 21 12:01:39.555404 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 12:01:39.555516 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 12:01:39.559092 systemd[1]: Stopped target paths.target - Path Units. 
Apr 21 12:01:39.561773 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 12:01:39.564541 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 12:01:39.570586 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 12:01:39.573302 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 12:01:39.576060 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 12:01:39.576122 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 12:01:39.579201 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 12:01:39.579252 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 12:01:39.590230 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 12:01:39.590303 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 12:01:39.616356 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 12:01:39.616449 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 21 12:01:39.625104 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 21 12:01:39.630849 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 12:01:39.634547 systemd-networkd[905]: eth0: DHCPv6 lease lost Apr 21 12:01:39.638991 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 12:01:39.642086 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 12:01:39.644648 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 12:01:39.651116 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 12:01:39.651180 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 21 12:01:39.664617 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 12:01:39.670159 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 21 12:01:39.670241 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 12:01:39.680864 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 12:01:39.682349 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 12:01:39.682476 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 12:01:39.704741 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 12:01:39.704827 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 12:01:39.708521 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 21 12:01:39.708590 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 12:01:39.712404 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 21 12:01:39.712461 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 12:01:39.716948 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 12:01:39.717102 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 12:01:39.729215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 21 12:01:39.729330 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 12:01:39.733990 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 21 12:01:39.734039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 12:01:39.734426 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 21 12:01:39.734472 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 12:01:39.735428 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 12:01:39.735479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Apr 21 12:01:39.779515 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: Data path switched from VF: enP23275s1 Apr 21 12:01:39.779668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 12:01:39.779768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:01:39.793681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 12:01:39.797538 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 12:01:39.797608 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 12:01:39.806989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 12:01:39.807057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:01:39.820058 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 12:01:39.820209 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 12:01:39.825790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 21 12:01:39.825887 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 12:01:40.366687 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 12:01:40.366835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 12:01:40.371117 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 12:01:40.377038 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 12:01:40.377120 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 12:01:40.393771 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 12:01:40.549005 systemd[1]: Switching root. 
Apr 21 12:01:40.582982 systemd-journald[177]: Journal stopped Apr 21 12:01:29.134444 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026 Apr 21 12:01:29.134478 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 12:01:29.134494 kernel: BIOS-provided physical RAM map: Apr 21 12:01:29.134506 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 21 12:01:29.134517 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Apr 21 12:01:29.134528 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable Apr 21 12:01:29.134542 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Apr 21 12:01:29.134554 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Apr 21 12:01:29.134569 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Apr 21 12:01:29.134580 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Apr 21 12:01:29.134590 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Apr 21 12:01:29.134600 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Apr 21 12:01:29.134610 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Apr 21 12:01:29.134620 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Apr 21 12:01:29.134636 kernel: printk: bootconsole [earlyser0] enabled Apr 21 12:01:29.134653 kernel: NX (Execute Disable) protection: active Apr 
21 12:01:29.134665 kernel: APIC: Static calls initialized Apr 21 12:01:29.134679 kernel: efi: EFI v2.7 by Microsoft Apr 21 12:01:29.134692 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f420418 Apr 21 12:01:29.134706 kernel: SMBIOS 3.1.0 present. Apr 21 12:01:29.134719 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026 Apr 21 12:01:29.134732 kernel: Hypervisor detected: Microsoft Hyper-V Apr 21 12:01:29.134743 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Apr 21 12:01:29.134753 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0 Apr 21 12:01:29.134764 kernel: Hyper-V: Nested features: 0x1e0101 Apr 21 12:01:29.134779 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Apr 21 12:01:29.134792 kernel: Hyper-V: Using hypercall for remote TLB flush Apr 21 12:01:29.134804 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 21 12:01:29.134816 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 21 12:01:29.134830 kernel: tsc: Marking TSC unstable due to running on Hyper-V Apr 21 12:01:29.134842 kernel: tsc: Detected 2593.906 MHz processor Apr 21 12:01:29.134855 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 21 12:01:29.134868 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 21 12:01:29.134880 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Apr 21 12:01:29.134896 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 21 12:01:29.134909 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 21 12:01:29.134921 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Apr 21 12:01:29.134932 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Apr 21 12:01:29.134945 kernel: 
Using GB pages for direct mapping Apr 21 12:01:29.134959 kernel: Secure boot disabled Apr 21 12:01:29.134978 kernel: ACPI: Early table checksum verification disabled Apr 21 12:01:29.134996 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 21 12:01:29.135009 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135022 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135037 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 21 12:01:29.135051 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 21 12:01:29.135066 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135080 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135098 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135112 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135126 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135140 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 21 12:01:29.135155 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 21 12:01:29.135169 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Apr 21 12:01:29.135183 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 21 12:01:29.135198 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 21 12:01:29.135211 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 21 12:01:29.135229 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 21 12:01:29.135244 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Apr 21 12:01:29.135258 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Apr 21 12:01:29.135272 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 21 12:01:29.136328 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 21 12:01:29.136345 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 21 12:01:29.136360 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 21 12:01:29.136374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 21 12:01:29.136387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 21 12:01:29.136404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 21 12:01:29.136419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 21 12:01:29.136432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 21 12:01:29.136444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 21 12:01:29.136457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 21 12:01:29.136470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 21 12:01:29.136484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 21 12:01:29.136497 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 21 12:01:29.136514 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 21 12:01:29.136527 kernel: Zone ranges: Apr 21 12:01:29.136541 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 21 12:01:29.136554 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 21 12:01:29.136567 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 21 12:01:29.136579 kernel: Movable zone start for each node Apr 21 12:01:29.136592 kernel: Early memory node ranges Apr 21 12:01:29.136606 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Apr 21 12:01:29.136620 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Apr 21 12:01:29.136644 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Apr 21 12:01:29.136660 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 21 12:01:29.136674 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 21 12:01:29.136688 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 21 12:01:29.136703 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 21 12:01:29.136717 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 21 12:01:29.136731 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 21 12:01:29.136746 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Apr 21 12:01:29.136760 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 21 12:01:29.136779 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 21 12:01:29.136794 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 21 12:01:29.136809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 21 12:01:29.136825 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 21 12:01:29.136839 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 21 12:01:29.136854 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 21 12:01:29.136869 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 21 12:01:29.136883 kernel: Booting paravirtualized kernel on Hyper-V Apr 21 12:01:29.136898 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 21 12:01:29.136916 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 21 12:01:29.136931 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 21 12:01:29.136944 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 21 12:01:29.136957 kernel: pcpu-alloc: [0] 0 1 Apr 21 
12:01:29.136969 kernel: Hyper-V: PV spinlocks enabled Apr 21 12:01:29.136982 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 21 12:01:29.136996 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 12:01:29.137011 kernel: random: crng init done Apr 21 12:01:29.137026 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 21 12:01:29.137039 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 21 12:01:29.137052 kernel: Fallback order for Node 0: 0 Apr 21 12:01:29.137064 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Apr 21 12:01:29.137077 kernel: Policy zone: Normal Apr 21 12:01:29.137089 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 21 12:01:29.137102 kernel: software IO TLB: area num 2. Apr 21 12:01:29.137115 kernel: Memory: 8061212K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 321756K reserved, 0K cma-reserved) Apr 21 12:01:29.137128 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 21 12:01:29.137155 kernel: ftrace: allocating 37996 entries in 149 pages Apr 21 12:01:29.137169 kernel: ftrace: allocated 149 pages with 4 groups Apr 21 12:01:29.137183 kernel: Dynamic Preempt: voluntary Apr 21 12:01:29.137200 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 21 12:01:29.137215 kernel: rcu: RCU event tracing is enabled. Apr 21 12:01:29.137229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
Apr 21 12:01:29.137244 kernel: Trampoline variant of Tasks RCU enabled. Apr 21 12:01:29.137259 kernel: Rude variant of Tasks RCU enabled. Apr 21 12:01:29.137274 kernel: Tracing variant of Tasks RCU enabled. Apr 21 12:01:29.137312 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 21 12:01:29.137325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 21 12:01:29.137337 kernel: Using NULL legacy PIC Apr 21 12:01:29.137350 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 21 12:01:29.137363 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 21 12:01:29.137376 kernel: Console: colour dummy device 80x25 Apr 21 12:01:29.137389 kernel: printk: console [tty1] enabled Apr 21 12:01:29.137401 kernel: printk: console [ttyS0] enabled Apr 21 12:01:29.137417 kernel: printk: bootconsole [earlyser0] disabled Apr 21 12:01:29.137432 kernel: ACPI: Core revision 20230628 Apr 21 12:01:29.137448 kernel: Failed to register legacy timer interrupt Apr 21 12:01:29.137462 kernel: APIC: Switch to symmetric I/O mode setup Apr 21 12:01:29.137477 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 21 12:01:29.137490 kernel: Hyper-V: Using IPI hypercalls Apr 21 12:01:29.137503 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 21 12:01:29.137516 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 21 12:01:29.137530 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 21 12:01:29.137546 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 21 12:01:29.137559 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 21 12:01:29.137573 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 21 12:01:29.137586 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593906) Apr 21 12:01:29.137600 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 21 12:01:29.137614 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 21 12:01:29.137628 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 21 12:01:29.137641 kernel: Spectre V2 : Mitigation: Retpolines Apr 21 12:01:29.137654 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 21 12:01:29.137667 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 21 12:01:29.137684 kernel: RETBleed: Vulnerable Apr 21 12:01:29.137698 kernel: Speculative Store Bypass: Vulnerable Apr 21 12:01:29.137712 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 21 12:01:29.137725 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 21 12:01:29.137738 kernel: active return thunk: its_return_thunk Apr 21 12:01:29.137752 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 21 12:01:29.137765 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 21 12:01:29.137779 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 21 12:01:29.137793 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 21 12:01:29.137806 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 21 12:01:29.137824 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 21 12:01:29.137838 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 21 12:01:29.137851 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 21 12:01:29.137865 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 21 12:01:29.137878 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 21 12:01:29.137893 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 21 12:01:29.137907 kernel: 
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 21 12:01:29.137921 kernel: Freeing SMP alternatives memory: 32K Apr 21 12:01:29.137934 kernel: pid_max: default: 32768 minimum: 301 Apr 21 12:01:29.137948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 21 12:01:29.137962 kernel: landlock: Up and running. Apr 21 12:01:29.137976 kernel: SELinux: Initializing. Apr 21 12:01:29.137993 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 21 12:01:29.138007 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 21 12:01:29.138021 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 21 12:01:29.138036 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 21 12:01:29.138051 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 21 12:01:29.138066 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 21 12:01:29.138081 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 21 12:01:29.138095 kernel: signal: max sigframe size: 3632 Apr 21 12:01:29.138109 kernel: rcu: Hierarchical SRCU implementation. Apr 21 12:01:29.138127 kernel: rcu: Max phase no-delay instances is 400. Apr 21 12:01:29.138141 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 21 12:01:29.138156 kernel: smp: Bringing up secondary CPUs ... Apr 21 12:01:29.138170 kernel: smpboot: x86: Booting SMP configuration: Apr 21 12:01:29.138184 kernel: .... node #0, CPUs: #1 Apr 21 12:01:29.138200 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. 
Apr 21 12:01:29.138216 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 21 12:01:29.138230 kernel: smp: Brought up 1 node, 2 CPUs Apr 21 12:01:29.138245 kernel: smpboot: Max logical packages: 1 Apr 21 12:01:29.138263 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 21 12:01:29.139307 kernel: devtmpfs: initialized Apr 21 12:01:29.139335 kernel: x86/mm: Memory block size: 128MB Apr 21 12:01:29.139354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 21 12:01:29.139373 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 21 12:01:29.139390 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 21 12:01:29.139405 kernel: pinctrl core: initialized pinctrl subsystem Apr 21 12:01:29.139420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 21 12:01:29.139435 kernel: audit: initializing netlink subsys (disabled) Apr 21 12:01:29.139453 kernel: audit: type=2000 audit(1776772888.030:1): state=initialized audit_enabled=0 res=1 Apr 21 12:01:29.139468 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 21 12:01:29.139483 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 21 12:01:29.139497 kernel: cpuidle: using governor menu Apr 21 12:01:29.139511 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 21 12:01:29.139525 kernel: dca service started, version 1.12.1 Apr 21 12:01:29.139540 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Apr 21 12:01:29.139553 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Apr 21 12:01:29.139568 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 21 12:01:29.139587 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 21 12:01:29.139602 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 21 12:01:29.139617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 21 12:01:29.139633 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 21 12:01:29.139648 kernel: ACPI: Added _OSI(Module Device) Apr 21 12:01:29.139663 kernel: ACPI: Added _OSI(Processor Device) Apr 21 12:01:29.139678 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 21 12:01:29.139693 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 21 12:01:29.139711 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 21 12:01:29.139726 kernel: ACPI: Interpreter enabled Apr 21 12:01:29.139741 kernel: ACPI: PM: (supports S0 S5) Apr 21 12:01:29.139756 kernel: ACPI: Using IOAPIC for interrupt routing Apr 21 12:01:29.139772 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 21 12:01:29.139787 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 21 12:01:29.139802 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Apr 21 12:01:29.139817 kernel: iommu: Default domain type: Translated Apr 21 12:01:29.139832 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 21 12:01:29.139847 kernel: efivars: Registered efivars operations Apr 21 12:01:29.139865 kernel: PCI: Using ACPI for IRQ routing Apr 21 12:01:29.139880 kernel: PCI: System does not support PCI Apr 21 12:01:29.139895 kernel: vgaarb: loaded Apr 21 12:01:29.139910 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Apr 21 12:01:29.139925 kernel: VFS: Disk quotas dquot_6.6.0 Apr 21 12:01:29.139939 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 21 12:01:29.139954 kernel: pnp: PnP ACPI init Apr 21 12:01:29.139969 kernel: pnp: PnP ACPI: found 3 devices Apr 21 12:01:29.139985 kernel: 
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 21 12:01:29.140003 kernel: NET: Registered PF_INET protocol family Apr 21 12:01:29.140018 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 21 12:01:29.140033 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 21 12:01:29.140049 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 21 12:01:29.140064 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 21 12:01:29.140079 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 21 12:01:29.140095 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 21 12:01:29.140110 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 21 12:01:29.140125 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 21 12:01:29.140143 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 12:01:29.140158 kernel: NET: Registered PF_XDP protocol family Apr 21 12:01:29.140173 kernel: PCI: CLS 0 bytes, default 64 Apr 21 12:01:29.140188 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 21 12:01:29.140205 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Apr 21 12:01:29.140220 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 21 12:01:29.140236 kernel: Initialise system trusted keyrings Apr 21 12:01:29.140251 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 21 12:01:29.140269 kernel: Key type asymmetric registered Apr 21 12:01:29.141339 kernel: Asymmetric key parser 'x509' registered Apr 21 12:01:29.141358 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 12:01:29.141371 kernel: io scheduler mq-deadline registered Apr 21 12:01:29.141385 kernel: io scheduler kyber registered Apr 
21 12:01:29.141399 kernel: io scheduler bfq registered Apr 21 12:01:29.141414 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 12:01:29.141428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 12:01:29.141439 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 12:01:29.141451 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 21 12:01:29.141469 kernel: i8042: PNP: No PS/2 controller found. Apr 21 12:01:29.141644 kernel: rtc_cmos 00:02: registered as rtc0 Apr 21 12:01:29.141757 kernel: rtc_cmos 00:02: setting system clock to 2026-04-21T12:01:28 UTC (1776772888) Apr 21 12:01:29.141854 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Apr 21 12:01:29.141869 kernel: intel_pstate: CPU model not supported Apr 21 12:01:29.141878 kernel: efifb: probing for efifb Apr 21 12:01:29.141891 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 21 12:01:29.141907 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 21 12:01:29.141916 kernel: efifb: scrolling: redraw Apr 21 12:01:29.141926 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 21 12:01:29.141937 kernel: Console: switching to colour frame buffer device 128x48 Apr 21 12:01:29.141945 kernel: fb0: EFI VGA frame buffer device Apr 21 12:01:29.141963 kernel: pstore: Using crash dump compression: deflate Apr 21 12:01:29.141971 kernel: pstore: Registered efi_pstore as persistent store backend Apr 21 12:01:29.141982 kernel: NET: Registered PF_INET6 protocol family Apr 21 12:01:29.141992 kernel: Segment Routing with IPv6 Apr 21 12:01:29.142005 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 12:01:29.142015 kernel: NET: Registered PF_PACKET protocol family Apr 21 12:01:29.142025 kernel: Key type dns_resolver registered Apr 21 12:01:29.142037 kernel: IPI shorthand broadcast: enabled Apr 21 12:01:29.142045 kernel: sched_clock: Marking stable (937003800, 51069900)->(1211802200, -223728500) 
Apr 21 12:01:29.142053 kernel: registered taskstats version 1 Apr 21 12:01:29.142066 kernel: Loading compiled-in X.509 certificates Apr 21 12:01:29.142075 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 12:01:29.142087 kernel: Key type .fscrypt registered Apr 21 12:01:29.142097 kernel: Key type fscrypt-provisioning registered Apr 21 12:01:29.142107 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 12:01:29.142119 kernel: ima: Allocated hash algorithm: sha1 Apr 21 12:01:29.142127 kernel: ima: No architecture policies found Apr 21 12:01:29.142135 kernel: clk: Disabling unused clocks Apr 21 12:01:29.142143 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 12:01:29.142151 kernel: Write protecting the kernel read-only data: 36864k Apr 21 12:01:29.142159 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 12:01:29.142168 kernel: Run /init as init process Apr 21 12:01:29.142178 kernel: with arguments: Apr 21 12:01:29.142186 kernel: /init Apr 21 12:01:29.142194 kernel: with environment: Apr 21 12:01:29.142203 kernel: HOME=/ Apr 21 12:01:29.142210 kernel: TERM=linux Apr 21 12:01:29.142221 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 12:01:29.142236 systemd[1]: Detected virtualization microsoft. Apr 21 12:01:29.142245 systemd[1]: Detected architecture x86-64. Apr 21 12:01:29.142256 systemd[1]: Running in initrd. Apr 21 12:01:29.142268 systemd[1]: No hostname configured, using default hostname. Apr 21 12:01:29.142290 systemd[1]: Hostname set to . Apr 21 12:01:29.142300 systemd[1]: Initializing machine ID from random generator. 
Apr 21 12:01:29.142312 systemd[1]: Queued start job for default target initrd.target. Apr 21 12:01:29.142322 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 12:01:29.142331 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 12:01:29.142340 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 12:01:29.142355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 12:01:29.142365 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 12:01:29.142374 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 12:01:29.142389 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 12:01:29.142398 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 12:01:29.142410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 12:01:29.142420 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 12:01:29.142431 systemd[1]: Reached target paths.target - Path Units. Apr 21 12:01:29.142442 systemd[1]: Reached target slices.target - Slice Units. Apr 21 12:01:29.142453 systemd[1]: Reached target swap.target - Swaps. Apr 21 12:01:29.142461 systemd[1]: Reached target timers.target - Timer Units. Apr 21 12:01:29.142473 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 12:01:29.142483 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 12:01:29.142496 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 21 12:01:29.142506 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 12:01:29.142515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 12:01:29.142531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 12:01:29.142541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 12:01:29.142553 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 12:01:29.142561 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 12:01:29.142574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 12:01:29.142583 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 12:01:29.142594 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 12:01:29.142605 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 12:01:29.142616 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 12:01:29.142650 systemd-journald[177]: Collecting audit messages is disabled. Apr 21 12:01:29.142675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:01:29.142685 systemd-journald[177]: Journal started Apr 21 12:01:29.142712 systemd-journald[177]: Runtime Journal (/run/log/journal/2ddc536794774acab5f6e5cda497381a) is 8.0M, max 158.7M, 150.7M free. Apr 21 12:01:29.164367 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 12:01:29.164903 systemd-modules-load[178]: Inserted module 'overlay' Apr 21 12:01:29.167800 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 12:01:29.174429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 12:01:29.182326 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 12:01:29.187303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 12:01:29.201543 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:01:29.213443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 12:01:29.217130 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 12:01:29.228469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 12:01:29.243650 kernel: Bridge firewalling registered Apr 21 12:01:29.242576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 12:01:29.250128 systemd-modules-load[178]: Inserted module 'br_netfilter' Apr 21 12:01:29.253717 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 12:01:29.265523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 12:01:29.276464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 12:01:29.280226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:01:29.293573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 12:01:29.301025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 12:01:29.308816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 12:01:29.320510 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 12:01:29.328478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 21 12:01:29.339991 dracut-cmdline[214]: dracut-dracut-053 Apr 21 12:01:29.345320 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 12:01:29.395563 systemd-resolved[219]: Positive Trust Anchors: Apr 21 12:01:29.395583 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 12:01:29.395635 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 12:01:29.425214 systemd-resolved[219]: Defaulting to hostname 'linux'. Apr 21 12:01:29.430038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 12:01:29.439310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 12:01:29.448014 kernel: SCSI subsystem initialized Apr 21 12:01:29.458298 kernel: Loading iSCSI transport class v2.0-870. 
Apr 21 12:01:29.470304 kernel: iscsi: registered transport (tcp) Apr 21 12:01:29.491934 kernel: iscsi: registered transport (qla4xxx) Apr 21 12:01:29.492008 kernel: QLogic iSCSI HBA Driver Apr 21 12:01:29.530900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 12:01:29.540538 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 12:01:29.568299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 12:01:29.568370 kernel: device-mapper: uevent: version 1.0.3 Apr 21 12:01:29.570295 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 12:01:29.614306 kernel: raid6: avx512x4 gen() 18229 MB/s Apr 21 12:01:29.634295 kernel: raid6: avx512x2 gen() 18238 MB/s Apr 21 12:01:29.653288 kernel: raid6: avx512x1 gen() 18131 MB/s Apr 21 12:01:29.672288 kernel: raid6: avx2x4 gen() 18231 MB/s Apr 21 12:01:29.692295 kernel: raid6: avx2x2 gen() 18166 MB/s Apr 21 12:01:29.712934 kernel: raid6: avx2x1 gen() 13845 MB/s Apr 21 12:01:29.712964 kernel: raid6: using algorithm avx512x2 gen() 18238 MB/s Apr 21 12:01:29.734738 kernel: raid6: .... xor() 30514 MB/s, rmw enabled Apr 21 12:01:29.734765 kernel: raid6: using avx512x2 recovery algorithm Apr 21 12:01:29.758309 kernel: xor: automatically using best checksumming function avx Apr 21 12:01:29.907313 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 12:01:29.917489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 12:01:29.929487 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 12:01:29.944821 systemd-udevd[399]: Using default interface naming scheme 'v255'. Apr 21 12:01:29.949670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 12:01:29.970468 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 21 12:01:29.986252 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Apr 21 12:01:30.017716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 12:01:30.027497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 12:01:30.071172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 12:01:30.088484 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 12:01:30.120180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 12:01:30.130097 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 12:01:30.138866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 12:01:30.146660 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 12:01:30.159538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 12:01:30.187958 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 12:01:30.193293 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 12:01:30.195346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:01:30.199772 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:01:30.203635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 12:01:30.203819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:01:30.207460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:01:30.238270 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 12:01:30.245165 kernel: AES CTR mode by8 optimization enabled Apr 21 12:01:30.237909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:01:30.244544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 21 12:01:30.260252 kernel: hv_vmbus: Vmbus version:5.2 Apr 21 12:01:30.261003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 12:01:30.265430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:01:30.283528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:01:30.306306 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 21 12:01:30.324566 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 21 12:01:30.324651 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 21 12:01:30.329295 kernel: hv_vmbus: registering driver hv_netvsc Apr 21 12:01:30.329344 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 21 12:01:30.337599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:01:30.348454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:01:30.363599 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 12:01:30.369318 kernel: PTP clock support registered Apr 21 12:01:30.395606 kernel: hv_utils: Registering HyperV Utility Driver Apr 21 12:01:30.395681 kernel: hv_vmbus: registering driver hv_utils Apr 21 12:01:30.395694 kernel: hv_vmbus: registering driver hid_hyperv Apr 21 12:01:30.398309 kernel: hv_vmbus: registering driver hv_storvsc Apr 21 12:01:30.404310 kernel: hv_utils: Heartbeat IC version 3.0 Apr 21 12:01:30.401514 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 12:01:30.417192 kernel: hv_utils: Shutdown IC version 3.2 Apr 21 12:01:30.417222 kernel: hv_utils: TimeSync IC version 4.0 Apr 21 12:01:31.457426 kernel: scsi host0: storvsc_host_t Apr 21 12:01:31.457713 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 21 12:01:31.457731 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 21 12:01:31.457793 systemd-resolved[219]: Clock change detected. Flushing caches. Apr 21 12:01:31.475197 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 21 12:01:31.475465 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 21 12:01:31.475708 kernel: scsi host1: storvsc_host_t Apr 21 12:01:31.495255 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 21 12:01:31.496218 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 12:01:31.496257 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 21 12:01:31.509523 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 21 12:01:31.509884 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 21 12:01:31.510074 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 12:01:31.515891 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 21 12:01:31.516232 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 21 12:01:31.527526 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#299 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 12:01:31.527773 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:01:31.530511 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: VF slot 1 added Apr 21 12:01:31.535892 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 12:01:31.557529 kernel: hv_vmbus: registering driver hv_pci Apr 21 12:01:31.562513 kernel: hv_pci d62de3a2-5aeb-4dad-b259-d747d26d640f: PCI VMBus probing: Using version 0x10004 Apr 21 12:01:31.576683 
kernel: hv_pci d62de3a2-5aeb-4dad-b259-d747d26d640f: PCI host bridge to bus 5aeb:00 Apr 21 12:01:31.576953 kernel: pci_bus 5aeb:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 21 12:01:31.577105 kernel: pci_bus 5aeb:00: No busn resource found for root bus, will use [bus 00-ff] Apr 21 12:01:31.588368 kernel: pci 5aeb:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 21 12:01:31.588464 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#273 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 12:01:31.588672 kernel: pci 5aeb:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 21 12:01:31.594075 kernel: pci 5aeb:00:02.0: enabling Extended Tags Apr 21 12:01:31.612616 kernel: pci 5aeb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5aeb:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 21 12:01:31.619643 kernel: pci_bus 5aeb:00: busn_res: [bus 00-ff] end is updated to 00 Apr 21 12:01:31.619984 kernel: pci 5aeb:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 21 12:01:31.787410 kernel: mlx5_core 5aeb:00:02.0: enabling device (0000 -> 0002) Apr 21 12:01:31.792526 kernel: mlx5_core 5aeb:00:02.0: firmware version: 14.30.5026 Apr 21 12:01:32.005916 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: VF registering: eth1 Apr 21 12:01:32.006300 kernel: mlx5_core 5aeb:00:02.0 eth1: joined to eth0 Apr 21 12:01:32.010523 kernel: mlx5_core 5aeb:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 21 12:01:32.021542 kernel: mlx5_core 5aeb:00:02.0 enP23275s1: renamed from eth1 Apr 21 12:01:32.073971 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 21 12:01:32.094625 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (467) Apr 21 12:01:32.110140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Apr 21 12:01:32.138320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 21 12:01:32.159522 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (441) Apr 21 12:01:32.174592 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 21 12:01:32.178749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 21 12:01:32.190793 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 12:01:33.222515 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:01:33.222985 disk-uuid[605]: The operation has completed successfully. Apr 21 12:01:33.308816 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 12:01:33.308940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 12:01:33.337667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 12:01:33.345356 sh[721]: Success Apr 21 12:01:33.377562 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 21 12:01:33.682009 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 12:01:33.698623 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 12:01:33.706839 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 12:01:33.727988 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 12:01:33.728065 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:01:33.732233 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 12:01:33.735601 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 12:01:33.738347 kernel: BTRFS info (device dm-0): using free space tree Apr 21 12:01:34.169795 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 12:01:34.176234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 12:01:34.189686 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 12:01:34.196505 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 12:01:34.223184 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:01:34.223262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:01:34.227217 kernel: BTRFS info (device sda6): using free space tree Apr 21 12:01:34.277523 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 12:01:34.289890 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 12:01:34.299763 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:01:34.301453 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 12:01:34.307950 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 12:01:34.319666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 12:01:34.327713 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 12:01:34.351691 systemd-networkd[905]: lo: Link UP Apr 21 12:01:34.351703 systemd-networkd[905]: lo: Gained carrier Apr 21 12:01:34.354035 systemd-networkd[905]: Enumeration completed Apr 21 12:01:34.354332 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 12:01:34.360177 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 12:01:34.360183 systemd-networkd[905]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 12:01:34.360287 systemd[1]: Reached target network.target - Network. Apr 21 12:01:34.421524 kernel: mlx5_core 5aeb:00:02.0 enP23275s1: Link up Apr 21 12:01:34.452523 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: Data path switched to VF: enP23275s1 Apr 21 12:01:34.453033 systemd-networkd[905]: enP23275s1: Link UP Apr 21 12:01:34.453182 systemd-networkd[905]: eth0: Link UP Apr 21 12:01:34.453395 systemd-networkd[905]: eth0: Gained carrier Apr 21 12:01:34.453410 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 12:01:34.468799 systemd-networkd[905]: enP23275s1: Gained carrier Apr 21 12:01:34.484543 systemd-networkd[905]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 21 12:01:35.407415 ignition[904]: Ignition 2.19.0 Apr 21 12:01:35.407430 ignition[904]: Stage: fetch-offline Apr 21 12:01:35.407489 ignition[904]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:01:35.407519 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:01:35.407658 ignition[904]: parsed url from cmdline: "" Apr 21 12:01:35.407663 ignition[904]: no config URL provided Apr 21 12:01:35.407671 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 12:01:35.407683 ignition[904]: no config at "/usr/lib/ignition/user.ign" Apr 21 12:01:35.407690 ignition[904]: failed to fetch config: resource requires networking Apr 21 12:01:35.407900 ignition[904]: Ignition finished successfully Apr 21 12:01:35.432302 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 12:01:35.442705 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 21 12:01:35.459951 ignition[914]: Ignition 2.19.0 Apr 21 12:01:35.459965 ignition[914]: Stage: fetch Apr 21 12:01:35.460193 ignition[914]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:01:35.460207 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:01:35.460350 ignition[914]: parsed url from cmdline: "" Apr 21 12:01:35.460354 ignition[914]: no config URL provided Apr 21 12:01:35.460359 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 12:01:35.460368 ignition[914]: no config at "/usr/lib/ignition/user.ign" Apr 21 12:01:35.460387 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 21 12:01:35.559771 ignition[914]: GET result: OK Apr 21 12:01:35.559914 ignition[914]: config has been read from IMDS userdata Apr 21 12:01:35.559949 ignition[914]: parsing config with SHA512: 41f14856ee7c6de4122623b0c495909a13efe270b09abfb242f0f9ad997c2bfab14ce3acdea8a9245d976f2d54cd8ad14410afce1966c374294f88018a3b8710 Apr 21 12:01:35.566397 unknown[914]: fetched base config from "system" Apr 21 12:01:35.566409 unknown[914]: fetched base config from "system" Apr 21 12:01:35.566965 ignition[914]: fetch: fetch complete Apr 21 12:01:35.566418 unknown[914]: fetched user config from "azure" Apr 21 12:01:35.566971 ignition[914]: fetch: fetch passed Apr 21 12:01:35.567017 ignition[914]: Ignition finished successfully Apr 21 12:01:35.580988 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 12:01:35.592706 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 12:01:35.609862 ignition[920]: Ignition 2.19.0 Apr 21 12:01:35.609875 ignition[920]: Stage: kargs Apr 21 12:01:35.613154 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 21 12:01:35.610118 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:01:35.610132 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:01:35.611088 ignition[920]: kargs: kargs passed Apr 21 12:01:35.611143 ignition[920]: Ignition finished successfully Apr 21 12:01:35.628756 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 12:01:35.647614 ignition[926]: Ignition 2.19.0 Apr 21 12:01:35.647628 ignition[926]: Stage: disks Apr 21 12:01:35.651778 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 12:01:35.647865 ignition[926]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:01:35.647877 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:01:35.650408 ignition[926]: disks: disks passed Apr 21 12:01:35.650457 ignition[926]: Ignition finished successfully Apr 21 12:01:35.666594 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 12:01:35.677370 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 12:01:35.681158 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 12:01:35.687803 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 12:01:35.691158 systemd[1]: Reached target basic.target - Basic System. Apr 21 12:01:35.703766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 12:01:35.789233 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 21 12:01:35.788882 systemd-networkd[905]: eth0: Gained IPv6LL Apr 21 12:01:35.796679 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 12:01:35.807714 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 12:01:35.906532 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. 
Apr 21 12:01:35.906749 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 12:01:35.907461 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 12:01:35.943597 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 12:01:35.963522 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Apr 21 12:01:35.974525 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:35.974606 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:01:35.974634 kernel: BTRFS info (device sda6): using free space tree
Apr 21 12:01:35.980691 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 12:01:35.987367 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 12:01:35.988468 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 12:01:35.995738 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 12:01:35.995780 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 12:01:36.009775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 12:01:36.012507 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 12:01:36.025680 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 12:01:36.901427 coreos-metadata[962]: Apr 21 12:01:36.901 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 21 12:01:36.907578 coreos-metadata[962]: Apr 21 12:01:36.907 INFO Fetch successful
Apr 21 12:01:36.910796 coreos-metadata[962]: Apr 21 12:01:36.910 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 21 12:01:36.922520 coreos-metadata[962]: Apr 21 12:01:36.922 INFO Fetch successful
Apr 21 12:01:36.939213 coreos-metadata[962]: Apr 21 12:01:36.939 INFO wrote hostname ci-4081.3.7-a-fffe528a55 to /sysroot/etc/hostname
Apr 21 12:01:36.944106 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 12:01:36.944793 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 12:01:36.971370 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory
Apr 21 12:01:36.979281 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 12:01:36.985721 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 12:01:37.773843 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 12:01:37.789687 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 12:01:37.796696 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 12:01:37.809278 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:37.810924 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 12:01:37.843543 ignition[1067]: INFO : Ignition 2.19.0
Apr 21 12:01:37.843543 ignition[1067]: INFO : Stage: mount
Apr 21 12:01:37.843543 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:37.843543 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:37.862481 ignition[1067]: INFO : mount: mount passed
Apr 21 12:01:37.862481 ignition[1067]: INFO : Ignition finished successfully
Apr 21 12:01:37.848873 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 12:01:37.855804 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 12:01:37.874697 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 12:01:37.883030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 12:01:37.906519 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078)
Apr 21 12:01:37.915048 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:01:37.915130 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:01:37.917898 kernel: BTRFS info (device sda6): using free space tree
Apr 21 12:01:37.926525 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 12:01:37.927433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
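Every record in this log follows the same shape: a journal timestamp with microsecond precision, a syslog identifier with an optional PID in brackets, and the message. A small sketch of a parser for that shape; the regex is an assumption derived from the entries shown here, not part of any systemd tool:

```python
import re

# Matches e.g. "Apr 21 12:01:37.843543 ignition[1067]: INFO : Stage: mount"
LINE_RE = re.compile(
    r"^(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<ident>[\w.@-]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$"
)

def parse_entry(line: str) -> dict:
    """Split one journal-style log line into its fields; raises on mismatch."""
    m = LINE_RE.match(line)
    if m is None:
        raise ValueError(f"not a journal line: {line!r}")
    return m.groupdict()

entry = parse_entry("Apr 21 12:01:37.843543 ignition[1067]: INFO : Stage: mount")
print(entry["ident"], entry["pid"], entry["msg"])
```

Kernel messages ("kernel: …") carry no PID, so the bracketed group is optional and `pid` comes back as `None` for them.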
Apr 21 12:01:37.957387 ignition[1095]: INFO : Ignition 2.19.0
Apr 21 12:01:37.960133 ignition[1095]: INFO : Stage: files
Apr 21 12:01:37.962407 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:37.965600 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:37.965600 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 12:01:37.973454 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 12:01:37.977826 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 12:01:38.147236 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 12:01:38.151568 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 12:01:38.156038 unknown[1095]: wrote ssh authorized keys file for user: core
Apr 21 12:01:38.159341 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 12:01:38.190943 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 12:01:38.196939 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 12:01:38.196939 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 12:01:38.196939 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 12:01:38.248269 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 12:01:38.290633 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 12:01:38.297007 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 12:01:38.297007 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 12:01:38.297007 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:38.313549 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 12:01:38.673812 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 12:01:39.029284 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 12:01:39.029284 ignition[1095]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 21 12:01:39.052262 ignition[1095]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 12:01:39.060155 ignition[1095]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 12:01:39.060155 ignition[1095]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 21 12:01:39.060155 ignition[1095]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 12:01:39.079533 ignition[1095]: INFO : files: files passed
Apr 21 12:01:39.079533 ignition[1095]: INFO : Ignition finished successfully
Apr 21 12:01:39.073465 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 12:01:39.128739 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 12:01:39.138866 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 12:01:39.153879 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 12:01:39.154003 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 12:01:39.169485 initrd-setup-root-after-ignition[1126]: grep:
Apr 21 12:01:39.169485 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:01:39.169485 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:01:39.180545 initrd-setup-root-after-ignition[1126]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:01:39.185571 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 12:01:39.185920 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 12:01:39.196737 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 12:01:39.224621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 12:01:39.226540 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 12:01:39.234565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 12:01:39.240372 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 12:01:39.243448 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 12:01:39.251743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 12:01:39.270177 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 12:01:39.280698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 12:01:39.293980 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 12:01:39.300618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 12:01:39.307262 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 12:01:39.312299 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 12:01:39.312446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 12:01:39.316563 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 12:01:39.318665 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 12:01:39.319092 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 12:01:39.319572 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 12:01:39.320047 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 12:01:39.320965 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 12:01:39.339899 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 12:01:39.342760 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 12:01:39.343212 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 12:01:39.343700 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 12:01:39.344145 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 12:01:39.344274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 12:01:39.349244 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 12:01:39.349733 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 12:01:39.350148 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 12:01:39.385286 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 12:01:39.389254 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 12:01:39.389405 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 12:01:39.400565 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 12:01:39.400799 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 12:01:39.407274 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 12:01:39.407398 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 12:01:39.415828 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 12:01:39.415946 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 12:01:39.450011 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 12:01:39.450171 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 12:01:39.452638 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 12:01:39.466218 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 12:01:39.469548 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 12:01:39.475431 ignition[1147]: INFO : Ignition 2.19.0
Apr 21 12:01:39.475431 ignition[1147]: INFO : Stage: umount
Apr 21 12:01:39.475431 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:01:39.475431 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:01:39.475431 ignition[1147]: INFO : umount: umount passed
Apr 21 12:01:39.475431 ignition[1147]: INFO : Ignition finished successfully
Apr 21 12:01:39.469784 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 12:01:39.481179 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 12:01:39.481321 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 12:01:39.490062 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 12:01:39.490168 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 12:01:39.497772 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 12:01:39.497880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 12:01:39.507756 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 12:01:39.507883 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 12:01:39.513358 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 12:01:39.513417 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 12:01:39.522169 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 12:01:39.522229 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 12:01:39.552729 systemd[1]: Stopped target network.target - Network.
Apr 21 12:01:39.555404 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 12:01:39.555516 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 12:01:39.559092 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 12:01:39.561773 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 12:01:39.564541 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 12:01:39.570586 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 12:01:39.573302 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 12:01:39.576060 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 12:01:39.576122 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 12:01:39.579201 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 12:01:39.579252 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 12:01:39.590230 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 12:01:39.590303 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 12:01:39.616356 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 12:01:39.616449 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 12:01:39.625104 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 12:01:39.630849 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 12:01:39.634547 systemd-networkd[905]: eth0: DHCPv6 lease lost
Apr 21 12:01:39.638991 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 12:01:39.642086 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 12:01:39.644648 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 12:01:39.651116 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 12:01:39.651180 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 12:01:39.664617 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 12:01:39.670159 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 12:01:39.670241 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 12:01:39.680864 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 12:01:39.682349 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 12:01:39.682476 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 12:01:39.704741 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 12:01:39.704827 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 12:01:39.708521 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 12:01:39.708590 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 12:01:39.712404 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 12:01:39.712461 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 12:01:39.716948 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 12:01:39.717102 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 12:01:39.729215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 12:01:39.729330 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 12:01:39.733990 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 12:01:39.734039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 12:01:39.734426 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 12:01:39.734472 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 12:01:39.735428 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 12:01:39.735479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 12:01:39.779515 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: Data path switched from VF: enP23275s1
Apr 21 12:01:39.779668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 12:01:39.779768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 12:01:39.793681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 12:01:39.797538 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 12:01:39.797608 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 12:01:39.806989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:01:39.807057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:39.820058 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 12:01:39.820209 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 12:01:39.825790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 12:01:39.825887 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 12:01:40.366687 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 12:01:40.366835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 12:01:40.371117 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 12:01:40.377038 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 12:01:40.377120 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 12:01:40.393771 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 12:01:40.549005 systemd[1]: Switching root.
Apr 21 12:01:40.582982 systemd-journald[177]: Journal stopped
Apr 21 12:01:45.978656 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Apr 21 12:01:45.978716 kernel: SELinux: policy capability network_peer_controls=1 Apr 21 12:01:45.978740 kernel: SELinux: policy capability open_perms=1 Apr 21 12:01:45.978756 kernel: SELinux: policy capability extended_socket_class=1 Apr 21 12:01:45.978775 kernel: SELinux: policy capability always_check_network=0 Apr 21 12:01:45.978791 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 21 12:01:45.978812 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 21 12:01:45.978830 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 21 12:01:45.978851 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 21 12:01:45.978868 kernel: audit: type=1403 audit(1776772902.482:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 21 12:01:45.978887 systemd[1]: Successfully loaded SELinux policy in 135.908ms. Apr 21 12:01:45.978909 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.920ms. Apr 21 12:01:45.978929 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 12:01:45.978946 systemd[1]: Detected virtualization microsoft. Apr 21 12:01:45.978970 systemd[1]: Detected architecture x86-64. Apr 21 12:01:45.978992 systemd[1]: Detected first boot. Apr 21 12:01:45.979011 systemd[1]: Hostname set to . Apr 21 12:01:45.979031 systemd[1]: Initializing machine ID from random generator. Apr 21 12:01:45.979050 zram_generator::config[1207]: No configuration found. Apr 21 12:01:45.979076 systemd[1]: Populated /etc with preset unit settings. Apr 21 12:01:45.979096 systemd[1]: Queued start job for default target multi-user.target. Apr 21 12:01:45.979115 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Apr 21 12:01:45.979136 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 12:01:45.979157 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 12:01:45.979178 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 12:01:45.979199 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 12:01:45.979225 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 12:01:45.979245 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 12:01:45.979266 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 12:01:45.979287 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 12:01:45.979307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 12:01:45.979327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 12:01:45.979349 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 12:01:45.979370 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 12:01:45.979394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 12:01:45.979415 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 12:01:45.979436 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 12:01:45.979456 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 12:01:45.979476 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 12:01:45.983878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 12:01:45.983921 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 12:01:45.983941 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 12:01:45.983960 systemd[1]: Reached target swap.target - Swaps.
Apr 21 12:01:45.983982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 12:01:45.984001 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 12:01:45.984020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 12:01:45.984038 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 12:01:45.984057 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 12:01:45.984076 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 12:01:45.984094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 12:01:45.984117 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 12:01:45.984136 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 12:01:45.984156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 12:01:45.984175 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 12:01:45.984194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:45.984218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 12:01:45.984237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 12:01:45.984255 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 12:01:45.984274 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 12:01:45.984293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:01:45.984312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 12:01:45.984333 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 12:01:45.984351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:01:45.984374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 12:01:45.984393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 12:01:45.984412 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 12:01:45.984432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 12:01:45.984451 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 12:01:45.984471 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 12:01:45.984501 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 12:01:45.984520 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 12:01:45.984539 kernel: fuse: init (API version 7.39)
Apr 21 12:01:45.984554 kernel: loop: module loaded
Apr 21 12:01:45.984571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 12:01:45.984590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 12:01:45.984609 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 12:01:45.984628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 12:01:45.984647 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:45.984667 kernel: ACPI: bus type drm_connector registered
Apr 21 12:01:45.984688 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 12:01:45.984707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 12:01:45.984726 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 12:01:45.984745 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 12:01:45.984764 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 12:01:45.984783 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 12:01:45.984801 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 12:01:45.984821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 12:01:45.984840 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 12:01:45.984861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 12:01:45.984919 systemd-journald[1314]: Collecting audit messages is disabled.
Apr 21 12:01:45.984959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:01:45.984978 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:01:45.985001 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 12:01:45.985020 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 12:01:45.985038 systemd-journald[1314]: Journal started
Apr 21 12:01:45.985075 systemd-journald[1314]: Runtime Journal (/run/log/journal/25ee1a7bf82e4d34bddfd0d6c1af0a0e) is 8.0M, max 158.7M, 150.7M free.
Apr 21 12:01:45.995512 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 12:01:45.999273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 12:01:45.999703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 12:01:46.003939 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 12:01:46.004126 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 12:01:46.007704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 12:01:46.007901 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 12:01:46.011462 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 12:01:46.016127 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 12:01:46.020437 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 12:01:46.024730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 12:01:46.038450 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 12:01:46.048587 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 12:01:46.055633 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 12:01:46.059784 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 12:01:46.062793 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 12:01:46.072548 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 12:01:46.076686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 12:01:46.078381 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 12:01:46.082326 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 12:01:46.084185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 12:01:46.094987 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 12:01:46.105695 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 12:01:46.121896 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 12:01:46.126057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 12:01:46.130597 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 12:01:46.141143 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 12:01:46.145181 udevadm[1370]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 12:01:46.161363 systemd-journald[1314]: Time spent on flushing to /var/log/journal/25ee1a7bf82e4d34bddfd0d6c1af0a0e is 16.936ms for 946 entries.
Apr 21 12:01:46.161363 systemd-journald[1314]: System Journal (/var/log/journal/25ee1a7bf82e4d34bddfd0d6c1af0a0e) is 8.0M, max 2.6G, 2.6G free.
Apr 21 12:01:46.206488 systemd-journald[1314]: Received client request to flush runtime journal.
Apr 21 12:01:46.208882 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 12:01:46.223969 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Apr 21 12:01:46.223998 systemd-tmpfiles[1368]: ACLs are not supported, ignoring.
Apr 21 12:01:46.226969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 12:01:46.236651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 12:01:46.249770 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 12:01:46.405755 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 12:01:46.417695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 12:01:46.433791 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Apr 21 12:01:46.433818 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Apr 21 12:01:46.438780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 12:01:47.075212 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 12:01:47.091692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 12:01:47.114599 systemd-udevd[1394]: Using default interface naming scheme 'v255'.
Apr 21 12:01:47.415904 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 12:01:47.431670 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 12:01:47.503589 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 21 12:01:47.544332 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 12:01:47.590522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#60 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 21 12:01:47.614516 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 12:01:47.645592 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 12:01:47.677521 kernel: hv_vmbus: registering driver hv_balloon
Apr 21 12:01:47.681520 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 21 12:01:47.703514 kernel: hv_vmbus: registering driver hyperv_fb
Apr 21 12:01:47.714711 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 21 12:01:47.714810 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 21 12:01:47.722758 kernel: Console: switching to colour dummy device 80x25
Apr 21 12:01:47.730095 kernel: Console: switching to colour frame buffer device 128x48
Apr 21 12:01:47.771553 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1408)
Apr 21 12:01:47.903209 systemd-networkd[1402]: lo: Link UP
Apr 21 12:01:47.903220 systemd-networkd[1402]: lo: Gained carrier
Apr 21 12:01:47.914219 systemd-networkd[1402]: Enumeration completed
Apr 21 12:01:47.918005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:47.919781 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:47.919786 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 12:01:47.922725 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 12:01:47.953620 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 12:01:47.961329 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:01:47.965026 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:48.013516 kernel: mlx5_core 5aeb:00:02.0 enP23275s1: Link up
Apr 21 12:01:48.041603 kernel: hv_netvsc 000d3ad9-878c-000d-3ad9-878c000d3ad9 eth0: Data path switched to VF: enP23275s1
Apr 21 12:01:48.046026 systemd-networkd[1402]: enP23275s1: Link UP
Apr 21 12:01:48.046324 systemd-networkd[1402]: eth0: Link UP
Apr 21 12:01:48.046557 systemd-networkd[1402]: eth0: Gained carrier
Apr 21 12:01:48.046677 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:48.051827 systemd-networkd[1402]: enP23275s1: Gained carrier
Apr 21 12:01:48.058970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 21 12:01:48.077095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:48.096611 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 12:01:48.164558 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Apr 21 12:01:48.220395 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 12:01:48.230834 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 12:01:48.381677 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 12:01:48.413846 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 12:01:48.418636 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 12:01:48.427705 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 12:01:48.434279 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 12:01:48.461883 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 12:01:48.466322 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 12:01:48.470286 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 12:01:48.470326 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 12:01:48.470418 systemd[1]: Reached target machines.target - Containers.
Apr 21 12:01:48.472737 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 12:01:48.482804 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 12:01:48.487747 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 12:01:48.488069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:01:48.492669 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 12:01:48.498818 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 12:01:48.514299 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 12:01:48.519262 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 12:01:48.562025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 12:01:48.584621 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 12:01:48.586032 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 12:01:48.611833 kernel: loop0: detected capacity change from 0 to 142488
Apr 21 12:01:48.715307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:49.155528 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 12:01:49.219529 kernel: loop1: detected capacity change from 0 to 140768
Apr 21 12:01:49.753524 kernel: loop2: detected capacity change from 0 to 228704
Apr 21 12:01:49.813524 kernel: loop3: detected capacity change from 0 to 31056
Apr 21 12:01:49.996788 systemd-networkd[1402]: eth0: Gained IPv6LL
Apr 21 12:01:49.999726 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 12:01:50.258514 kernel: loop4: detected capacity change from 0 to 142488
Apr 21 12:01:50.281517 kernel: loop5: detected capacity change from 0 to 140768
Apr 21 12:01:50.301518 kernel: loop6: detected capacity change from 0 to 228704
Apr 21 12:01:50.322583 kernel: loop7: detected capacity change from 0 to 31056
Apr 21 12:01:50.334637 (sd-merge)[1515]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 21 12:01:50.335899 (sd-merge)[1515]: Merged extensions into '/usr'.
Apr 21 12:01:50.339970 systemd[1]: Reloading requested from client PID 1496 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 12:01:50.340116 systemd[1]: Reloading...
Apr 21 12:01:50.393637 zram_generator::config[1539]: No configuration found.
Apr 21 12:01:50.562043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:01:50.637898 systemd[1]: Reloading finished in 297 ms.
Apr 21 12:01:50.655953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 12:01:50.667663 systemd[1]: Starting ensure-sysext.service...
Apr 21 12:01:50.673708 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 12:01:50.685591 systemd[1]: Reloading requested from client PID 1606 ('systemctl') (unit ensure-sysext.service)...
Apr 21 12:01:50.685615 systemd[1]: Reloading...
Apr 21 12:01:50.709018 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 12:01:50.709602 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 12:01:50.710935 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 12:01:50.711379 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Apr 21 12:01:50.711466 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Apr 21 12:01:50.754220 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 12:01:50.754240 systemd-tmpfiles[1607]: Skipping /boot
Apr 21 12:01:50.769014 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 12:01:50.769035 systemd-tmpfiles[1607]: Skipping /boot
Apr 21 12:01:50.786042 zram_generator::config[1638]: No configuration found.
Apr 21 12:01:50.941112 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:01:51.021021 systemd[1]: Reloading finished in 334 ms.
Apr 21 12:01:51.038997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 12:01:51.066726 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 12:01:51.072054 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 12:01:51.080663 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 12:01:51.091134 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 12:01:51.102735 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 12:01:51.114958 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:51.115456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:01:51.130908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:01:51.137812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 12:01:51.154857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 12:01:51.158946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:01:51.159122 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:51.160308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:01:51.166676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:01:51.182724 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 12:01:51.182957 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 12:01:51.191401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:51.193777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:01:51.200832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:01:51.207634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:01:51.207902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:51.212057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 12:01:51.212285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 12:01:51.221233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:01:51.223117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:01:51.231445 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 12:01:51.243228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:51.243836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:01:51.250015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:01:51.261251 systemd-resolved[1706]: Positive Trust Anchors:
Apr 21 12:01:51.261270 systemd-resolved[1706]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 12:01:51.261316 systemd-resolved[1706]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 12:01:51.261799 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 12:01:51.268802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 12:01:51.287823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 12:01:51.291272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:01:51.291599 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 12:01:51.295014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:51.300143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:01:51.300367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:01:51.305765 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 12:01:51.305994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 12:01:51.310187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 12:01:51.310413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 12:01:51.314783 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 12:01:51.315228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 12:01:51.327893 systemd[1]: Finished ensure-sysext.service.
Apr 21 12:01:51.332697 augenrules[1745]: No rules
Apr 21 12:01:51.334754 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 12:01:51.339393 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 12:01:51.340195 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 12:01:51.351827 systemd-resolved[1706]: Using system hostname 'ci-4081.3.7-a-fffe528a55'.
Apr 21 12:01:51.353956 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 12:01:51.357604 systemd[1]: Reached target network.target - Network.
Apr 21 12:01:51.360507 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 12:01:51.364146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 12:01:51.368546 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 12:01:52.118161 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 12:01:52.122804 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 12:01:55.497865 ldconfig[1492]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 12:01:55.512980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 12:01:55.528852 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 12:01:55.542729 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 12:01:55.547064 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 12:01:55.550753 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 12:01:55.554846 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 12:01:55.559285 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 12:01:55.562860 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 12:01:55.566965 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 12:01:55.570920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 12:01:55.570956 systemd[1]: Reached target paths.target - Path Units.
Apr 21 12:01:55.573962 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 12:01:55.577922 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 12:01:55.583235 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 12:01:55.588107 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 12:01:55.594574 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 12:01:55.598566 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 12:01:55.601959 systemd[1]: Reached target basic.target - Basic System.
Apr 21 12:01:55.605038 systemd[1]: System is tainted: cgroupsv1
Apr 21 12:01:55.605107 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 12:01:55.605148 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 12:01:55.608038 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 21 12:01:55.613735 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 12:01:55.621687 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 12:01:55.639759 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 12:01:55.646664 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 12:01:55.651665 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 12:01:55.655231 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 12:01:55.655289 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 21 12:01:55.661755 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 21 12:01:55.666841 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 21 12:01:55.670060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:01:55.686787 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 12:01:55.696452 jq[1776]: false
Apr 21 12:01:55.703628 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 12:01:55.713623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 12:01:55.717051 (chronyd)[1771]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 21 12:01:55.733657 extend-filesystems[1779]: Found loop4
Apr 21 12:01:55.734754 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found loop5
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found loop6
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found loop7
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda1
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda2
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda3
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found usr
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda4
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda6
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda7
Apr 21 12:01:55.749325 extend-filesystems[1779]: Found sda9
Apr 21 12:01:55.749325 extend-filesystems[1779]: Checking size of /dev/sda9
Apr 21 12:01:55.742871 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 12:01:55.785125 chronyd[1799]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 21 12:01:55.765704 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 12:01:55.813469 chronyd[1799]: Timezone right/UTC failed leap second check, ignoring
Apr 21 12:01:55.779819 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 12:01:55.813758 chronyd[1799]: Loaded seccomp filter (level 2)
Apr 21 12:01:55.784400 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 12:01:55.847906 extend-filesystems[1779]: Old size kept for /dev/sda9
Apr 21 12:01:55.847906 extend-filesystems[1779]: Found sr0
Apr 21 12:01:55.895147 kernel: hv_utils: KVP IC version 4.0
Apr 21 12:01:55.805669 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 12:01:55.856661 KVP[1780]: KVP starting; pid is:1780
Apr 21 12:01:55.830157 systemd[1]: Started chronyd.service - NTP client/server.
Apr 21 12:01:55.897017 jq[1804]: true
Apr 21 12:01:55.893614 KVP[1780]: KVP LIC Version: 3.1
Apr 21 12:01:55.851667 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 12:01:55.852016 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 12:01:55.852427 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 12:01:55.852746 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 12:01:55.878928 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 12:01:55.879257 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 12:01:55.905973 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 12:01:55.906284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 12:01:55.914943 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 12:01:55.951785 (ntainerd)[1827]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 12:01:55.955379 dbus-daemon[1775]: [system] SELinux support is enabled
Apr 21 12:01:55.957104 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 12:01:55.991431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 12:01:55.991697 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 12:01:55.995949 jq[1826]: true
Apr 21 12:01:56.000896 update_engine[1797]: I20260421 12:01:56.000741 1797 main.cc:92] Flatcar Update Engine starting
Apr 21 12:01:56.002942 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 12:01:56.002985 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 12:01:56.028454 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 12:01:56.035995 update_engine[1797]: I20260421 12:01:56.032485 1797 update_check_scheduler.cc:74] Next update check in 7m58s
Apr 21 12:01:56.037311 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1848)
Apr 21 12:01:56.038618 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 12:01:56.047697 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 12:01:56.063296 tar[1821]: linux-amd64/LICENSE
Apr 21 12:01:56.063296 tar[1821]: linux-amd64/helm
Apr 21 12:01:56.113912 systemd-logind[1793]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 12:01:56.118098 systemd-logind[1793]: New seat seat0.
Apr 21 12:01:56.123990 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 12:01:56.134093 coreos-metadata[1773]: Apr 21 12:01:56.133 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 21 12:01:56.144707 coreos-metadata[1773]: Apr 21 12:01:56.144 INFO Fetch successful
Apr 21 12:01:56.144837 coreos-metadata[1773]: Apr 21 12:01:56.144 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 21 12:01:56.150332 coreos-metadata[1773]: Apr 21 12:01:56.150 INFO Fetch successful
Apr 21 12:01:56.150332 coreos-metadata[1773]: Apr 21 12:01:56.150 INFO Fetching http://168.63.129.16/machine/59ff4e9d-f50c-45d0-b07a-5acb4e5fd608/01f8382a%2D2662%2D4205%2D9620%2D31a7dae1c4eb.%5Fci%2D4081.3.7%2Da%2Dfffe528a55?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 21 12:01:56.154979 coreos-metadata[1773]: Apr 21 12:01:56.153 INFO Fetch successful
Apr 21 12:01:56.154979 coreos-metadata[1773]: Apr 21 12:01:56.153 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 21 12:01:56.169921 coreos-metadata[1773]: Apr 21 12:01:56.169 INFO Fetch successful
Apr 21 12:01:56.226738 bash[1889]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 12:01:56.230716 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 12:01:56.238247 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 12:01:56.281113 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 12:01:56.285921 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 12:01:56.423010 locksmithd[1857]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 12:01:56.667103 sshd_keygen[1822]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 12:01:56.704648 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 12:01:56.717965 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 12:01:56.732638 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 21 12:01:56.761891 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 12:01:56.762213 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 12:01:56.774786 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 12:01:56.798725 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Apr 21 12:01:56.817445 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 12:01:56.829975 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 12:01:56.842046 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 12:01:56.848873 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 12:01:57.064777 tar[1821]: linux-amd64/README.md
Apr 21 12:01:57.085014 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 12:01:57.246034 containerd[1827]: time="2026-04-21T12:01:57.237467800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 12:01:57.273328 containerd[1827]: time="2026-04-21T12:01:57.273267300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:57.275176 containerd[1827]: time="2026-04-21T12:01:57.275133000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:57.275176 containerd[1827]: time="2026-04-21T12:01:57.275168500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 12:01:57.275349 containerd[1827]: time="2026-04-21T12:01:57.275188000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275380200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275412900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275510100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275527300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275810700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275830800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275850500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275866100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276118 containerd[1827]: time="2026-04-21T12:01:57.275964000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276479 containerd[1827]: time="2026-04-21T12:01:57.276212800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276479 containerd[1827]: time="2026-04-21T12:01:57.276439300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:57.276479 containerd[1827]: time="2026-04-21T12:01:57.276462000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 12:01:57.276620 containerd[1827]: time="2026-04-21T12:01:57.276603900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 12:01:57.277056 containerd[1827]: time="2026-04-21T12:01:57.276673200Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.303292600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.303370500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.303394500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.303416400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.303438300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.303640700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304112700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304249900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304274300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304291000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304313400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304334900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304365800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.304526 containerd[1827]: time="2026-04-21T12:01:57.304390800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304413800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304446500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304467800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304488400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304533700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304554000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304570300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304589000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304604100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304620200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304635900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304654500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304670900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305109 containerd[1827]: time="2026-04-21T12:01:57.304693800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304710900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304728800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304747000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304777300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304809600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304828100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304844400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304910800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304938100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304956700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304973400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.304987300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.305006200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 12:01:57.305603 containerd[1827]: time="2026-04-21T12:01:57.305024500Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 12:01:57.306075 containerd[1827]: time="2026-04-21T12:01:57.305044800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 12:01:57.306120 containerd[1827]: time="2026-04-21T12:01:57.305436000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 12:01:57.306120 containerd[1827]: time="2026-04-21T12:01:57.305671100Z" level=info msg="Connect containerd service"
Apr 21 12:01:57.306120 containerd[1827]: time="2026-04-21T12:01:57.305805100Z" level=info msg="using legacy CRI server"
Apr 21 12:01:57.306120 containerd[1827]: time="2026-04-21T12:01:57.305818300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 12:01:57.306120 containerd[1827]: time="2026-04-21T12:01:57.305987600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 12:01:57.307147 containerd[1827]: time="2026-04-21T12:01:57.307107400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 12:01:57.307529 containerd[1827]: time="2026-04-21T12:01:57.307289600Z" level=info msg="Start subscribing containerd event"
Apr 21 12:01:57.307529 containerd[1827]: time="2026-04-21T12:01:57.307373300Z" level=info msg="Start recovering state"
Apr 21 12:01:57.307529 containerd[1827]: time="2026-04-21T12:01:57.307452800Z" level=info msg="Start event monitor"
Apr 21 12:01:57.307529 containerd[1827]: time="2026-04-21T12:01:57.307472700Z" level=info msg="Start snapshots syncer"
Apr 21 12:01:57.308949 containerd[1827]: time="2026-04-21T12:01:57.308237700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 12:01:57.308949 containerd[1827]: time="2026-04-21T12:01:57.308311700Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 12:01:57.310907 containerd[1827]: time="2026-04-21T12:01:57.310473500Z" level=info msg="Start cni network conf syncer for default"
Apr 21 12:01:57.310907 containerd[1827]: time="2026-04-21T12:01:57.310533100Z" level=info msg="Start streaming server"
Apr 21 12:01:57.310777 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 12:01:57.315231 containerd[1827]: time="2026-04-21T12:01:57.314941300Z" level=info msg="containerd successfully booted in 0.078430s"
Apr 21 12:01:57.589696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:01:57.595411 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 12:01:57.595731 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:01:57.599597 systemd[1]: Startup finished in 13.675s (kernel) + 15.251s (userspace) = 28.927s.
Apr 21 12:01:57.981581 login[1942]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 21 12:01:57.983385 login[1941]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 21 12:01:57.995142 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 12:01:58.001722 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 12:01:58.011254 systemd-logind[1793]: New session 2 of user core.
Apr 21 12:01:58.020359 systemd-logind[1793]: New session 1 of user core.
Apr 21 12:01:58.051453 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 12:01:58.060869 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 12:01:58.074329 (systemd)[1977]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 12:01:58.261470 systemd[1977]: Queued start job for default target default.target.
Apr 21 12:01:58.262505 systemd[1977]: Created slice app.slice - User Application Slice.
Apr 21 12:01:58.262539 systemd[1977]: Reached target paths.target - Paths.
Apr 21 12:01:58.262556 systemd[1977]: Reached target timers.target - Timers.
Apr 21 12:01:58.276607 systemd[1977]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 12:01:58.283483 systemd[1977]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 12:01:58.283579 systemd[1977]: Reached target sockets.target - Sockets.
Apr 21 12:01:58.283601 systemd[1977]: Reached target basic.target - Basic System.
Apr 21 12:01:58.283651 systemd[1977]: Reached target default.target - Main User Target.
Apr 21 12:01:58.283690 systemd[1977]: Startup finished in 200ms.
Apr 21 12:01:58.283845 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 12:01:58.288190 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 12:01:58.294839 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 12:01:58.434078 kubelet[1964]: E0421 12:01:58.434019 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:01:58.436351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:01:58.436600 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:01:59.159924 waagent[1937]: 2026-04-21T12:01:59.159810Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.160589Z INFO Daemon Daemon OS: flatcar 4081.3.7
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.163538Z INFO Daemon Daemon Python: 3.11.9
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.164839Z INFO Daemon Daemon Run daemon
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.165763Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.7'
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.166698Z INFO Daemon Daemon Using waagent for provisioning
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.167864Z INFO Daemon Daemon Activate resource disk
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.168243Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.173150Z INFO Daemon Daemon Found device: None
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.174354Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.175363Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.178338Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 21 12:01:59.203009 waagent[1937]: 2026-04-21T12:01:59.179175Z INFO Daemon Daemon Running default provisioning handler
Apr 21 12:01:59.206712 waagent[1937]: 2026-04-21T12:01:59.206585Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Apr 21 12:01:59.215220 waagent[1937]: 2026-04-21T12:01:59.215144Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 21 12:01:59.220388 waagent[1937]: 2026-04-21T12:01:59.220318Z INFO Daemon Daemon cloud-init is enabled: False
Apr 21 12:01:59.223612 waagent[1937]: 2026-04-21T12:01:59.223554Z INFO Daemon Daemon Copying ovf-env.xml
Apr 21 12:01:59.330526 waagent[1937]: 2026-04-21T12:01:59.324265Z INFO Daemon Daemon Successfully mounted dvd
Apr 21 12:01:59.340192 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Apr 21 12:01:59.352157 waagent[1937]: 2026-04-21T12:01:59.340673Z INFO Daemon Daemon Detect protocol endpoint
Apr 21 12:01:59.352157 waagent[1937]: 2026-04-21T12:01:59.340994Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 21 12:01:59.352157 waagent[1937]: 2026-04-21T12:01:59.342336Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Apr 21 12:01:59.352157 waagent[1937]: 2026-04-21T12:01:59.343360Z INFO Daemon Daemon Test for route to 168.63.129.16
Apr 21 12:01:59.352157 waagent[1937]: 2026-04-21T12:01:59.344572Z INFO Daemon Daemon Route to 168.63.129.16 exists
Apr 21 12:01:59.352157 waagent[1937]: 2026-04-21T12:01:59.345638Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Apr 21 12:01:59.401308 waagent[1937]: 2026-04-21T12:01:59.401240Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Apr 21 12:01:59.410745 waagent[1937]: 2026-04-21T12:01:59.401811Z INFO Daemon Daemon Wire protocol version:2012-11-30
Apr 21 12:01:59.410745 waagent[1937]: 2026-04-21T12:01:59.402741Z INFO Daemon Daemon Server preferred version:2015-04-05
Apr 21 12:01:59.545830 waagent[1937]: 2026-04-21T12:01:59.545716Z INFO Daemon Daemon Initializing goal state during protocol detection
Apr 21 12:01:59.552559 waagent[1937]: 2026-04-21T12:01:59.546239Z INFO Daemon Daemon Forcing an update of the goal state.
Apr 21 12:01:59.554678 waagent[1937]: 2026-04-21T12:01:59.554619Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 21 12:01:59.568741 waagent[1937]: 2026-04-21T12:01:59.568686Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181
Apr 21 12:01:59.590533 waagent[1937]: 2026-04-21T12:01:59.569404Z INFO Daemon
Apr 21 12:01:59.590533 waagent[1937]: 2026-04-21T12:01:59.570520Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 13311fc4-8123-4878-adb3-c0bc74e45b8a eTag: 7743306931409380242 source: Fabric]
Apr 21 12:01:59.590533 waagent[1937]: 2026-04-21T12:01:59.571334Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Apr 21 12:01:59.590533 waagent[1937]: 2026-04-21T12:01:59.572587Z INFO Daemon
Apr 21 12:01:59.590533 waagent[1937]: 2026-04-21T12:01:59.573179Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Apr 21 12:01:59.590533 waagent[1937]: 2026-04-21T12:01:59.577487Z INFO Daemon Daemon Downloading artifacts profile blob
Apr 21 12:01:59.698726 waagent[1937]: 2026-04-21T12:01:59.698579Z INFO Daemon Downloaded certificate {'thumbprint': '628F6B1E409FB08022ECFE54ECA19F510C212A7F', 'hasPrivateKey': True}
Apr 21 12:01:59.705360 waagent[1937]: 2026-04-21T12:01:59.705284Z INFO Daemon Fetch goal state completed
Apr 21 12:01:59.739642 waagent[1937]: 2026-04-21T12:01:59.739532Z INFO Daemon Daemon Starting provisioning
Apr 21 12:01:59.757036 waagent[1937]: 2026-04-21T12:01:59.739983Z INFO Daemon Daemon Handle ovf-env.xml.
Apr 21 12:01:59.757036 waagent[1937]: 2026-04-21T12:01:59.741250Z INFO Daemon Daemon Set hostname [ci-4081.3.7-a-fffe528a55]
Apr 21 12:01:59.757036 waagent[1937]: 2026-04-21T12:01:59.744803Z INFO Daemon Daemon Publish hostname [ci-4081.3.7-a-fffe528a55]
Apr 21 12:01:59.757036 waagent[1937]: 2026-04-21T12:01:59.745377Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Apr 21 12:01:59.757036 waagent[1937]: 2026-04-21T12:01:59.746449Z INFO Daemon Daemon Primary interface is [eth0]
Apr 21 12:01:59.827216 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:59.827229 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 12:01:59.827288 systemd-networkd[1402]: eth0: DHCP lease lost
Apr 21 12:01:59.828768 waagent[1937]: 2026-04-21T12:01:59.828673Z INFO Daemon Daemon Create user account if not exists
Apr 21 12:01:59.837529 waagent[1937]: 2026-04-21T12:01:59.829096Z INFO Daemon Daemon User core already exists, skip useradd
Apr 21 12:01:59.837529 waagent[1937]: 2026-04-21T12:01:59.830149Z INFO Daemon Daemon Configure sudoer
Apr 21 12:01:59.837529 waagent[1937]: 2026-04-21T12:01:59.831450Z INFO Daemon Daemon Configure sshd
Apr 21 12:01:59.837529 waagent[1937]: 2026-04-21T12:01:59.832414Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Apr 21 12:01:59.837529 waagent[1937]: 2026-04-21T12:01:59.832767Z INFO Daemon Daemon Deploy ssh public key.
Apr 21 12:01:59.848592 systemd-networkd[1402]: eth0: DHCPv6 lease lost
Apr 21 12:01:59.876571 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.5/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 12:02:00.953311 waagent[1937]: 2026-04-21T12:02:00.953238Z INFO Daemon Daemon Provisioning complete
Apr 21 12:02:00.968321 waagent[1937]: 2026-04-21T12:02:00.968253Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Apr 21 12:02:00.972318 waagent[1937]: 2026-04-21T12:02:00.972248Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Apr 21 12:02:00.977909 waagent[1937]: 2026-04-21T12:02:00.977847Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Apr 21 12:02:01.105537 waagent[2032]: 2026-04-21T12:02:01.105428Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Apr 21 12:02:01.106021 waagent[2032]: 2026-04-21T12:02:01.105630Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.7
Apr 21 12:02:01.106021 waagent[2032]: 2026-04-21T12:02:01.105721Z INFO ExtHandler ExtHandler Python: 3.11.9
Apr 21 12:02:01.151866 waagent[2032]: 2026-04-21T12:02:01.151757Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.7; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 21 12:02:01.152113 waagent[2032]: 2026-04-21T12:02:01.152059Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 12:02:01.152201 waagent[2032]: 2026-04-21T12:02:01.152164Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 12:02:01.159888 waagent[2032]: 2026-04-21T12:02:01.159814Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 21 12:02:01.165028 waagent[2032]: 2026-04-21T12:02:01.164967Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181
Apr 21 12:02:01.165505 waagent[2032]: 2026-04-21T12:02:01.165449Z INFO ExtHandler
Apr 21 12:02:01.165611 waagent[2032]: 2026-04-21T12:02:01.165571Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f5198b96-f1b9-4a26-a2a1-b8f2c7cf7354 eTag: 7743306931409380242 source: Fabric]
Apr 21 12:02:01.165946 waagent[2032]: 2026-04-21T12:02:01.165894Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Apr 21 12:02:01.166545 waagent[2032]: 2026-04-21T12:02:01.166473Z INFO ExtHandler
Apr 21 12:02:01.166627 waagent[2032]: 2026-04-21T12:02:01.166588Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Apr 21 12:02:01.169463 waagent[2032]: 2026-04-21T12:02:01.169419Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Apr 21 12:02:01.228269 waagent[2032]: 2026-04-21T12:02:01.228116Z INFO ExtHandler Downloaded certificate {'thumbprint': '628F6B1E409FB08022ECFE54ECA19F510C212A7F', 'hasPrivateKey': True}
Apr 21 12:02:01.228840 waagent[2032]: 2026-04-21T12:02:01.228775Z INFO ExtHandler Fetch goal state completed
Apr 21 12:02:01.242951 waagent[2032]: 2026-04-21T12:02:01.242878Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2032
Apr 21 12:02:01.243125 waagent[2032]: 2026-04-21T12:02:01.243069Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Apr 21 12:02:01.244755 waagent[2032]: 2026-04-21T12:02:01.244694Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.7', '', 'Flatcar Container Linux by Kinvolk']
Apr 21 12:02:01.245134 waagent[2032]: 2026-04-21T12:02:01.245083Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 21 12:02:01.311733 waagent[2032]: 2026-04-21T12:02:01.311679Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 21 12:02:01.311984 waagent[2032]: 2026-04-21T12:02:01.311934Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 21 12:02:01.318975 waagent[2032]: 2026-04-21T12:02:01.318926Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 21 12:02:01.326613 systemd[1]: Reloading requested from client PID 2045 ('systemctl') (unit waagent.service)...
Apr 21 12:02:01.326634 systemd[1]: Reloading...
Apr 21 12:02:01.428617 zram_generator::config[2085]: No configuration found.
Apr 21 12:02:01.550525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:02:01.631648 systemd[1]: Reloading finished in 304 ms.
Apr 21 12:02:01.658757 waagent[2032]: 2026-04-21T12:02:01.658185Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Apr 21 12:02:01.666282 systemd[1]: Reloading requested from client PID 2141 ('systemctl') (unit waagent.service)...
Apr 21 12:02:01.666300 systemd[1]: Reloading...
Apr 21 12:02:01.743525 zram_generator::config[2171]: No configuration found.
Apr 21 12:02:01.885029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:02:01.966076 systemd[1]: Reloading finished in 299 ms.
Apr 21 12:02:01.991758 waagent[2032]: 2026-04-21T12:02:01.991556Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Apr 21 12:02:01.991901 waagent[2032]: 2026-04-21T12:02:01.991783Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Apr 21 12:02:02.463202 waagent[2032]: 2026-04-21T12:02:02.463102Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Apr 21 12:02:02.463948 waagent[2032]: 2026-04-21T12:02:02.463883Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Apr 21 12:02:02.464803 waagent[2032]: 2026-04-21T12:02:02.464725Z INFO ExtHandler ExtHandler Starting env monitor service.
Apr 21 12:02:02.465660 waagent[2032]: 2026-04-21T12:02:02.465584Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Apr 21 12:02:02.466046 waagent[2032]: 2026-04-21T12:02:02.465975Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 12:02:02.466168 waagent[2032]: 2026-04-21T12:02:02.466109Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 12:02:02.466300 waagent[2032]: 2026-04-21T12:02:02.466262Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 12:02:02.466416 waagent[2032]: 2026-04-21T12:02:02.466353Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 21 12:02:02.467032 waagent[2032]: 2026-04-21T12:02:02.466775Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 12:02:02.467032 waagent[2032]: 2026-04-21T12:02:02.466976Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Apr 21 12:02:02.467280 waagent[2032]: 2026-04-21T12:02:02.467230Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Apr 21 12:02:02.467732 waagent[2032]: 2026-04-21T12:02:02.467671Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 21 12:02:02.467852 waagent[2032]: 2026-04-21T12:02:02.467771Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Apr 21 12:02:02.468046 waagent[2032]: 2026-04-21T12:02:02.468003Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 21 12:02:02.468046 waagent[2032]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 21 12:02:02.468046 waagent[2032]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0
Apr 21 12:02:02.468046 waagent[2032]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 21 12:02:02.468046 waagent[2032]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 21 12:02:02.468046 waagent[2032]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 21 12:02:02.468046 waagent[2032]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 21 12:02:02.468714 waagent[2032]: 2026-04-21T12:02:02.468660Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 21 12:02:02.469617 waagent[2032]: 2026-04-21T12:02:02.469392Z INFO EnvHandler ExtHandler Configure routes
Apr 21 12:02:02.469617 waagent[2032]: 2026-04-21T12:02:02.469585Z INFO EnvHandler ExtHandler Gateway:None
Apr 21 12:02:02.469735 waagent[2032]: 2026-04-21T12:02:02.469676Z INFO EnvHandler ExtHandler Routes:None
Apr 21 12:02:02.473946 waagent[2032]: 2026-04-21T12:02:02.473902Z INFO ExtHandler ExtHandler
Apr 21 12:02:02.474301 waagent[2032]: 2026-04-21T12:02:02.474261Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c4128933-d4d7-4394-96fd-af8f2e4d7ab0 correlation 4cf6de52-f7b5-48b5-9a18-8afc278e131a created: 2026-04-21T12:00:59.073051Z]
Apr 21 12:02:02.475425 waagent[2032]: 2026-04-21T12:02:02.475384Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 21 12:02:02.477698 waagent[2032]: 2026-04-21T12:02:02.477637Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Apr 21 12:02:02.512162 waagent[2032]: 2026-04-21T12:02:02.512096Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FB200213-576B-4665-80ED-58F33B6917B7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 21 12:02:02.556175 waagent[2032]: 2026-04-21T12:02:02.556084Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Apr 21 12:02:02.556175 waagent[2032]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:02:02.556175 waagent[2032]: pkts bytes target prot opt in out source destination
Apr 21 12:02:02.556175 waagent[2032]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:02:02.556175 waagent[2032]: pkts bytes target prot opt in out source destination
Apr 21 12:02:02.556175 waagent[2032]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:02:02.556175 waagent[2032]: pkts bytes target prot opt in out source destination
Apr 21 12:02:02.556175 waagent[2032]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 21 12:02:02.556175 waagent[2032]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 21 12:02:02.556175 waagent[2032]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 21 12:02:02.559669 waagent[2032]: 2026-04-21T12:02:02.559605Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 21 12:02:02.559669 waagent[2032]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:02:02.559669 waagent[2032]: pkts bytes target prot opt in out source destination
Apr 21 12:02:02.559669 waagent[2032]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:02:02.559669 waagent[2032]: pkts bytes target prot opt in out source destination
Apr 21 12:02:02.559669 waagent[2032]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:02:02.559669 waagent[2032]: pkts bytes target prot opt in out source destination
Apr 21 12:02:02.559669 waagent[2032]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 21 12:02:02.559669 waagent[2032]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 21 12:02:02.559669 waagent[2032]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 21 12:02:02.560053 waagent[2032]: 2026-04-21T12:02:02.559944Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 21 12:02:02.585252 waagent[2032]: 2026-04-21T12:02:02.585166Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 21 12:02:02.585252 waagent[2032]: Executing ['ip', '-a', '-o', 'link']:
Apr 21 12:02:02.585252 waagent[2032]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 21 12:02:02.585252 waagent[2032]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d9:87:8c brd ff:ff:ff:ff:ff:ff
Apr 21 12:02:02.585252 waagent[2032]: 3: enP23275s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d9:87:8c brd ff:ff:ff:ff:ff:ff\ altname enP23275p0s2
Apr 21 12:02:02.585252 waagent[2032]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 21 12:02:02.585252 waagent[2032]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 21 12:02:02.585252 waagent[2032]: 2: eth0 inet 10.0.0.5/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 21 12:02:02.585252 waagent[2032]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 21 12:02:02.585252 waagent[2032]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 21 12:02:02.585252 waagent[2032]: 2: eth0 inet6 fe80::20d:3aff:fed9:878c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 21 12:02:08.687349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 12:02:08.693759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:02:08.841710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:08.843068 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:02:09.576121 kubelet[2280]: E0421 12:02:09.576066 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:02:09.580270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:02:09.581311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:02:15.516610 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 12:02:15.521814 systemd[1]: Started sshd@0-10.0.0.5:22-20.229.252.112:55040.service - OpenSSH per-connection server daemon (20.229.252.112:55040).
Apr 21 12:02:15.719621 sshd[2289]: Accepted publickey for core from 20.229.252.112 port 55040 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:02:15.720244 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:02:15.724480 systemd-logind[1793]: New session 3 of user core.
Apr 21 12:02:15.731744 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 12:02:15.853285 systemd[1]: Started sshd@1-10.0.0.5:22-20.229.252.112:55044.service - OpenSSH per-connection server daemon (20.229.252.112:55044).
Apr 21 12:02:15.972623 sshd[2294]: Accepted publickey for core from 20.229.252.112 port 55044 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:02:15.974178 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:02:15.979203 systemd-logind[1793]: New session 4 of user core.
Apr 21 12:02:15.982786 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 12:02:16.083885 sshd[2294]: pam_unix(sshd:session): session closed for user core
Apr 21 12:02:16.087683 systemd[1]: sshd@1-10.0.0.5:22-20.229.252.112:55044.service: Deactivated successfully.
Apr 21 12:02:16.091139 systemd-logind[1793]: Session 4 logged out. Waiting for processes to exit.
Apr 21 12:02:16.091810 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 12:02:16.093913 systemd-logind[1793]: Removed session 4.
Apr 21 12:02:16.105778 systemd[1]: Started sshd@2-10.0.0.5:22-20.229.252.112:55060.service - OpenSSH per-connection server daemon (20.229.252.112:55060).
Apr 21 12:02:16.219977 sshd[2302]: Accepted publickey for core from 20.229.252.112 port 55060 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:02:16.220655 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:02:16.224800 systemd-logind[1793]: New session 5 of user core.
Apr 21 12:02:16.237365 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 12:02:16.332584 sshd[2302]: pam_unix(sshd:session): session closed for user core
Apr 21 12:02:16.335663 systemd[1]: sshd@2-10.0.0.5:22-20.229.252.112:55060.service: Deactivated successfully.
Apr 21 12:02:16.339756 systemd-logind[1793]: Session 5 logged out. Waiting for processes to exit.
Apr 21 12:02:16.341703 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 12:02:16.343088 systemd-logind[1793]: Removed session 5.
Apr 21 12:02:16.356782 systemd[1]: Started sshd@3-10.0.0.5:22-20.229.252.112:55066.service - OpenSSH per-connection server daemon (20.229.252.112:55066).
Apr 21 12:02:16.471211 sshd[2310]: Accepted publickey for core from 20.229.252.112 port 55066 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:02:16.472756 sshd[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:02:16.476924 systemd-logind[1793]: New session 6 of user core.
Apr 21 12:02:16.489816 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 12:02:16.593190 sshd[2310]: pam_unix(sshd:session): session closed for user core
Apr 21 12:02:16.596946 systemd-logind[1793]: Session 6 logged out. Waiting for processes to exit.
Apr 21 12:02:16.598229 systemd[1]: sshd@3-10.0.0.5:22-20.229.252.112:55066.service: Deactivated successfully.
Apr 21 12:02:16.602354 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 12:02:16.603423 systemd-logind[1793]: Removed session 6.
Apr 21 12:02:16.617780 systemd[1]: Started sshd@4-10.0.0.5:22-20.229.252.112:55072.service - OpenSSH per-connection server daemon (20.229.252.112:55072).
Apr 21 12:02:16.732797 sshd[2318]: Accepted publickey for core from 20.229.252.112 port 55072 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:02:16.734630 sshd[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:02:16.739563 systemd-logind[1793]: New session 7 of user core.
Apr 21 12:02:16.748797 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 12:02:16.958018 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 12:02:16.958442 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:02:16.985048 sudo[2322]: pam_unix(sudo:session): session closed for user root
Apr 21 12:02:17.002052 sshd[2318]: pam_unix(sshd:session): session closed for user core
Apr 21 12:02:17.006336 systemd[1]: sshd@4-10.0.0.5:22-20.229.252.112:55072.service: Deactivated successfully.
Apr 21 12:02:17.009887 systemd-logind[1793]: Session 7 logged out. Waiting for processes to exit.
Apr 21 12:02:17.011202 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 12:02:17.012759 systemd-logind[1793]: Removed session 7.
Apr 21 12:02:17.022778 systemd[1]: Started sshd@5-10.0.0.5:22-20.229.252.112:55074.service - OpenSSH per-connection server daemon (20.229.252.112:55074).
Apr 21 12:02:17.142153 sshd[2327]: Accepted publickey for core from 20.229.252.112 port 55074 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:02:17.142835 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:02:17.147989 systemd-logind[1793]: New session 8 of user core.
Apr 21 12:02:17.157252 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 12:02:17.242484 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 12:02:17.242905 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:02:17.246662 sudo[2332]: pam_unix(sudo:session): session closed for user root
Apr 21 12:02:17.251938 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 12:02:17.252307 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:02:17.270930 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 12:02:17.272850 auditctl[2335]: No rules
Apr 21 12:02:17.273867 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 12:02:17.274226 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 12:02:17.289297 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 12:02:17.312327 augenrules[2354]: No rules
Apr 21 12:02:17.314192 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 12:02:17.318739 sudo[2331]: pam_unix(sudo:session): session closed for user root
Apr 21 12:02:17.335168 sshd[2327]: pam_unix(sshd:session): session closed for user core
Apr 21 12:02:17.338832 systemd[1]: sshd@5-10.0.0.5:22-20.229.252.112:55074.service: Deactivated successfully.
Apr 21 12:02:17.343874 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 12:02:17.344525 systemd-logind[1793]: Session 8 logged out. Waiting for processes to exit.
Apr 21 12:02:17.345560 systemd-logind[1793]: Removed session 8.
Apr 21 12:02:17.359294 systemd[1]: Started sshd@6-10.0.0.5:22-20.229.252.112:55078.service - OpenSSH per-connection server daemon (20.229.252.112:55078).
Apr 21 12:02:17.471205 sshd[2363]: Accepted publickey for core from 20.229.252.112 port 55078 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:02:17.472728 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:02:17.477875 systemd-logind[1793]: New session 9 of user core.
Apr 21 12:02:17.480756 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 12:02:17.566114 sudo[2367]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 12:02:17.566523 sudo[2367]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:02:18.871877 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 12:02:18.872080 (dockerd)[2383]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 12:02:19.629316 chronyd[1799]: Selected source PHC0
Apr 21 12:02:19.830963 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 12:02:19.842320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:02:20.042685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:20.052933 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:02:20.665522 kubelet[2400]: E0421 12:02:20.665238 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:02:20.668099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:02:20.668413 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:02:21.111880 dockerd[2383]: time="2026-04-21T12:02:21.111726004Z" level=info msg="Starting up"
Apr 21 12:02:21.991898 dockerd[2383]: time="2026-04-21T12:02:21.991841865Z" level=info msg="Loading containers: start."
Apr 21 12:02:22.160517 kernel: Initializing XFRM netlink socket
Apr 21 12:02:22.296695 systemd-networkd[1402]: docker0: Link UP
Apr 21 12:02:22.326905 dockerd[2383]: time="2026-04-21T12:02:22.326858065Z" level=info msg="Loading containers: done."
Apr 21 12:02:22.439747 dockerd[2383]: time="2026-04-21T12:02:22.439688865Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 12:02:22.439936 dockerd[2383]: time="2026-04-21T12:02:22.439828965Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 12:02:22.439984 dockerd[2383]: time="2026-04-21T12:02:22.439968065Z" level=info msg="Daemon has completed initialization"
Apr 21 12:02:22.503276 dockerd[2383]: time="2026-04-21T12:02:22.503212065Z" level=info msg="API listen on /run/docker.sock"
Apr 21 12:02:22.504021 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 12:02:23.156866 containerd[1827]: time="2026-04-21T12:02:23.156812165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 12:02:23.946651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258689919.mount: Deactivated successfully.
Apr 21 12:02:25.755325 containerd[1827]: time="2026-04-21T12:02:25.755263765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:25.757899 containerd[1827]: time="2026-04-21T12:02:25.757823865Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193997"
Apr 21 12:02:25.761179 containerd[1827]: time="2026-04-21T12:02:25.761116065Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:25.766931 containerd[1827]: time="2026-04-21T12:02:25.766858365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:25.768354 containerd[1827]: time="2026-04-21T12:02:25.768047565Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.6111793s"
Apr 21 12:02:25.768354 containerd[1827]: time="2026-04-21T12:02:25.768101065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 21 12:02:25.769236 containerd[1827]: time="2026-04-21T12:02:25.769140165Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 12:02:27.633360 containerd[1827]: time="2026-04-21T12:02:27.633296965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:27.636238 containerd[1827]: time="2026-04-21T12:02:27.636180765Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171455"
Apr 21 12:02:27.640262 containerd[1827]: time="2026-04-21T12:02:27.640027886Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:27.645430 containerd[1827]: time="2026-04-21T12:02:27.645366404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:27.646881 containerd[1827]: time="2026-04-21T12:02:27.646458612Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.877177244s"
Apr 21 12:02:27.646881 containerd[1827]: time="2026-04-21T12:02:27.646513822Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 21 12:02:27.647447 containerd[1827]: time="2026-04-21T12:02:27.647412194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 12:02:29.172747 containerd[1827]: time="2026-04-21T12:02:29.172685171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:29.175481 containerd[1827]: time="2026-04-21T12:02:29.175417726Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289764"
Apr 21 12:02:29.179280 containerd[1827]: time="2026-04-21T12:02:29.179222702Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:29.184312 containerd[1827]: time="2026-04-21T12:02:29.184250403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:29.185708 containerd[1827]: time="2026-04-21T12:02:29.185545629Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.53809223s"
Apr 21 12:02:29.185708 containerd[1827]: time="2026-04-21T12:02:29.185585830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 21 12:02:29.186348 containerd[1827]: time="2026-04-21T12:02:29.186287844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 12:02:30.430370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304067801.mount: Deactivated successfully.
Apr 21 12:02:30.807707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 21 12:02:30.816609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:02:31.002965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:31.015973 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:02:31.636103 kubelet[2624]: E0421 12:02:31.636035 2624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:02:31.638719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:02:31.639078 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:02:31.704163 containerd[1827]: time="2026-04-21T12:02:31.704106146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:31.710724 containerd[1827]: time="2026-04-21T12:02:31.710518075Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010719"
Apr 21 12:02:31.714797 containerd[1827]: time="2026-04-21T12:02:31.714733559Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:31.719633 containerd[1827]: time="2026-04-21T12:02:31.719573556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:31.720735 containerd[1827]: time="2026-04-21T12:02:31.720241070Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 2.533910726s"
Apr 21 12:02:31.720735 containerd[1827]: time="2026-04-21T12:02:31.720281671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 21 12:02:31.721060 containerd[1827]: time="2026-04-21T12:02:31.721035086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 12:02:32.350509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467464119.mount: Deactivated successfully.
Apr 21 12:02:33.739343 containerd[1827]: time="2026-04-21T12:02:33.739282068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:33.742370 containerd[1827]: time="2026-04-21T12:02:33.742297728Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Apr 21 12:02:33.745565 containerd[1827]: time="2026-04-21T12:02:33.745509493Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:33.750559 containerd[1827]: time="2026-04-21T12:02:33.750486493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:33.751634 containerd[1827]: time="2026-04-21T12:02:33.751594915Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.030494928s"
Apr 21 12:02:33.751717 containerd[1827]: time="2026-04-21T12:02:33.751640116Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 21 12:02:33.752692 containerd[1827]: time="2026-04-21T12:02:33.752205927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 12:02:34.392016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83614607.mount: Deactivated successfully.
Apr 21 12:02:34.412022 containerd[1827]: time="2026-04-21T12:02:34.411956860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:34.415094 containerd[1827]: time="2026-04-21T12:02:34.415027522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Apr 21 12:02:34.418603 containerd[1827]: time="2026-04-21T12:02:34.418549193Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:34.423840 containerd[1827]: time="2026-04-21T12:02:34.423771397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:34.424706 containerd[1827]: time="2026-04-21T12:02:34.424554813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 672.314986ms"
Apr 21 12:02:34.424706 containerd[1827]: time="2026-04-21T12:02:34.424600314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 21 12:02:34.425396 containerd[1827]: time="2026-04-21T12:02:34.425366729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 12:02:35.164413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1588056939.mount: Deactivated successfully.
Apr 21 12:02:35.813519 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Apr 21 12:02:36.802800 containerd[1827]: time="2026-04-21T12:02:36.802736661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:36.805764 containerd[1827]: time="2026-04-21T12:02:36.805515837Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719434"
Apr 21 12:02:36.808800 containerd[1827]: time="2026-04-21T12:02:36.808734425Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:36.817198 containerd[1827]: time="2026-04-21T12:02:36.817067653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:36.818124 containerd[1827]: time="2026-04-21T12:02:36.817951278Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.392546947s"
Apr 21 12:02:36.818124 containerd[1827]: time="2026-04-21T12:02:36.817996979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 21 12:02:39.111621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:39.118766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:02:39.159900 systemd[1]: Reloading requested from client PID 2782 ('systemctl') (unit session-9.scope)...
Apr 21 12:02:39.160113 systemd[1]: Reloading...
Apr 21 12:02:39.302529 zram_generator::config[2826]: No configuration found.
Apr 21 12:02:39.429785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:02:39.510293 systemd[1]: Reloading finished in 349 ms.
Apr 21 12:02:39.560118 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 12:02:39.560235 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 12:02:39.560624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:39.569212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:02:39.931716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:39.940914 (kubelet)[2904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 12:02:39.978413 kubelet[2904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 12:02:39.978413 kubelet[2904]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35.
Image garbage collector will get sandbox image information from CRI. Apr 21 12:02:39.978413 kubelet[2904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 12:02:39.978978 kubelet[2904]: I0421 12:02:39.978518 2904 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 12:02:40.331579 kubelet[2904]: I0421 12:02:40.331535 2904 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 12:02:40.331579 kubelet[2904]: I0421 12:02:40.331568 2904 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 12:02:40.331904 kubelet[2904]: I0421 12:02:40.331881 2904 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 12:02:40.781253 kubelet[2904]: I0421 12:02:40.780768 2904 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 12:02:40.781986 kubelet[2904]: E0421 12:02:40.781927 2904 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 12:02:40.791488 kubelet[2904]: E0421 12:02:40.791441 2904 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 12:02:40.791488 kubelet[2904]: I0421 12:02:40.791484 2904 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 21 12:02:40.795198 kubelet[2904]: I0421 12:02:40.795171 2904 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 12:02:40.795727 kubelet[2904]: I0421 12:02:40.795683 2904 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 12:02:40.795956 kubelet[2904]: I0421 12:02:40.795725 2904 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-fffe528a55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"non
e","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 12:02:40.796122 kubelet[2904]: I0421 12:02:40.795966 2904 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 12:02:40.796122 kubelet[2904]: I0421 12:02:40.795980 2904 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 12:02:40.796193 kubelet[2904]: I0421 12:02:40.796184 2904 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:02:40.801188 kubelet[2904]: I0421 12:02:40.801163 2904 kubelet.go:480] "Attempting to sync node with API server" Apr 21 12:02:40.801303 kubelet[2904]: I0421 12:02:40.801205 2904 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 12:02:40.801303 kubelet[2904]: I0421 12:02:40.801245 2904 kubelet.go:386] "Adding apiserver pod source" Apr 21 12:02:40.801303 kubelet[2904]: I0421 12:02:40.801279 2904 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 12:02:40.808526 kubelet[2904]: E0421 12:02:40.807699 2904 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-fffe528a55&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 12:02:40.808526 kubelet[2904]: I0421 12:02:40.807834 2904 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 12:02:40.808526 kubelet[2904]: I0421 12:02:40.808510 2904 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 12:02:40.809530 kubelet[2904]: W0421 12:02:40.809484 2904 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 12:02:40.814707 kubelet[2904]: I0421 12:02:40.814685 2904 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 12:02:40.814806 kubelet[2904]: I0421 12:02:40.814740 2904 server.go:1289] "Started kubelet" Apr 21 12:02:40.817517 kubelet[2904]: I0421 12:02:40.816407 2904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 12:02:40.821109 kubelet[2904]: E0421 12:02:40.821069 2904 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 12:02:40.823521 kubelet[2904]: E0421 12:02:40.823066 2904 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 12:02:40.823521 kubelet[2904]: I0421 12:02:40.823127 2904 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 12:02:40.824334 kubelet[2904]: I0421 12:02:40.824309 2904 server.go:317] "Adding debug handlers to kubelet server" Apr 21 12:02:40.828147 kubelet[2904]: I0421 12:02:40.826844 2904 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 12:02:40.828147 kubelet[2904]: E0421 12:02:40.827119 2904 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-fffe528a55\" not found" Apr 21 12:02:40.828859 kubelet[2904]: I0421 12:02:40.828803 2904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 12:02:40.829140 kubelet[2904]: I0421 12:02:40.829111 2904 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 12:02:40.830440 kubelet[2904]: I0421 12:02:40.830414 2904 desired_state_of_world_populator.go:150] "Desired state populator 
starts to run" Apr 21 12:02:40.830537 kubelet[2904]: I0421 12:02:40.830474 2904 reconciler.go:26] "Reconciler: start to sync state" Apr 21 12:02:40.833083 kubelet[2904]: I0421 12:02:40.833052 2904 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 12:02:40.835231 kubelet[2904]: E0421 12:02:40.834592 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-fffe528a55?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms" Apr 21 12:02:40.836026 kubelet[2904]: E0421 12:02:40.834715 2904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.7-a-fffe528a55.18a85d9817e119c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.7-a-fffe528a55,UID:ci-4081.3.7-a-fffe528a55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.7-a-fffe528a55,},FirstTimestamp:2026-04-21 12:02:40.814700994 +0000 UTC m=+0.869932738,LastTimestamp:2026-04-21 12:02:40.814700994 +0000 UTC m=+0.869932738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.7-a-fffe528a55,}" Apr 21 12:02:40.836357 kubelet[2904]: I0421 12:02:40.836333 2904 factory.go:223] Registration of the systemd container factory successfully Apr 21 12:02:40.836453 kubelet[2904]: I0421 12:02:40.836429 2904 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Apr 21 12:02:40.837610 kubelet[2904]: E0421 12:02:40.837581 2904 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 12:02:40.837841 kubelet[2904]: I0421 12:02:40.837818 2904 factory.go:223] Registration of the containerd container factory successfully Apr 21 12:02:40.872793 kubelet[2904]: I0421 12:02:40.872739 2904 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 12:02:40.874553 kubelet[2904]: I0421 12:02:40.874417 2904 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 12:02:40.874553 kubelet[2904]: I0421 12:02:40.874448 2904 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 12:02:40.874553 kubelet[2904]: I0421 12:02:40.874476 2904 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 12:02:40.874553 kubelet[2904]: I0421 12:02:40.874489 2904 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 12:02:40.874748 kubelet[2904]: E0421 12:02:40.874559 2904 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 12:02:40.879317 kubelet[2904]: E0421 12:02:40.879283 2904 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 12:02:40.897818 kubelet[2904]: I0421 12:02:40.897717 2904 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 12:02:40.897818 kubelet[2904]: I0421 12:02:40.897739 2904 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 12:02:40.897818 kubelet[2904]: I0421 12:02:40.897763 2904 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:02:40.903284 kubelet[2904]: I0421 12:02:40.903258 2904 policy_none.go:49] "None policy: Start" Apr 21 12:02:40.903284 kubelet[2904]: I0421 12:02:40.903285 2904 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 12:02:40.903430 kubelet[2904]: I0421 12:02:40.903301 2904 state_mem.go:35] "Initializing new in-memory state store" Apr 21 12:02:40.911348 kubelet[2904]: E0421 12:02:40.911312 2904 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 12:02:40.911580 kubelet[2904]: I0421 12:02:40.911557 2904 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 12:02:40.911645 kubelet[2904]: I0421 12:02:40.911580 2904 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 12:02:40.912899 kubelet[2904]: I0421 
12:02:40.912872 2904 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 12:02:40.915347 kubelet[2904]: E0421 12:02:40.915324 2904 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 12:02:40.915450 kubelet[2904]: E0421 12:02:40.915394 2904 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.7-a-fffe528a55\" not found" Apr 21 12:02:40.986343 kubelet[2904]: E0421 12:02:40.986211 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:40.991414 kubelet[2904]: E0421 12:02:40.991382 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.000225 kubelet[2904]: E0421 12:02:41.000195 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.013717 kubelet[2904]: I0421 12:02:41.013688 2904 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.014076 kubelet[2904]: E0421 12:02:41.014047 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.035673 kubelet[2904]: E0421 12:02:41.035536 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-fffe528a55?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" 
interval="400ms" Apr 21 12:02:41.132421 kubelet[2904]: I0421 12:02:41.132367 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36e3607fa1c910368c99e8d157bd4641-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-fffe528a55\" (UID: \"36e3607fa1c910368c99e8d157bd4641\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132421 kubelet[2904]: I0421 12:02:41.132417 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8ef841f25589be7612ae881b867ce03-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" (UID: \"f8ef841f25589be7612ae881b867ce03\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132703 kubelet[2904]: I0421 12:02:41.132441 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8ef841f25589be7612ae881b867ce03-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" (UID: \"f8ef841f25589be7612ae881b867ce03\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132703 kubelet[2904]: I0421 12:02:41.132462 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8ef841f25589be7612ae881b867ce03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" (UID: \"f8ef841f25589be7612ae881b867ce03\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132703 kubelet[2904]: I0421 12:02:41.132484 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132703 kubelet[2904]: I0421 12:02:41.132530 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132703 kubelet[2904]: I0421 12:02:41.132551 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132833 kubelet[2904]: I0421 12:02:41.132569 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.132833 kubelet[2904]: I0421 12:02:41.132598 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.216334 kubelet[2904]: I0421 12:02:41.216293 2904 
kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.216772 kubelet[2904]: E0421 12:02:41.216729 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.288073 containerd[1827]: time="2026-04-21T12:02:41.287940307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-fffe528a55,Uid:f8ef841f25589be7612ae881b867ce03,Namespace:kube-system,Attempt:0,}" Apr 21 12:02:41.293108 containerd[1827]: time="2026-04-21T12:02:41.292757809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-fffe528a55,Uid:fe5e9cca83a2490e7a441ed9879014a3,Namespace:kube-system,Attempt:0,}" Apr 21 12:02:41.302150 containerd[1827]: time="2026-04-21T12:02:41.301879803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-fffe528a55,Uid:36e3607fa1c910368c99e8d157bd4641,Namespace:kube-system,Attempt:0,}" Apr 21 12:02:41.436264 kubelet[2904]: E0421 12:02:41.436219 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-fffe528a55?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" Apr 21 12:02:41.619166 kubelet[2904]: I0421 12:02:41.619115 2904 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.619546 kubelet[2904]: E0421 12:02:41.619509 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:41.679716 kubelet[2904]: E0421 12:02:41.679656 2904 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 12:02:41.692555 update_engine[1797]: I20260421 12:02:41.692471 1797 update_attempter.cc:509] Updating boot flags... Apr 21 12:02:41.757521 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2950) Apr 21 12:02:41.832814 kubelet[2904]: E0421 12:02:41.831903 2904 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 12:02:41.946472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797142342.mount: Deactivated successfully. 
Apr 21 12:02:41.970582 containerd[1827]: time="2026-04-21T12:02:41.970528004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:41.973606 containerd[1827]: time="2026-04-21T12:02:41.973554768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 21 12:02:41.976252 containerd[1827]: time="2026-04-21T12:02:41.976209924Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:41.979379 containerd[1827]: time="2026-04-21T12:02:41.979342891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:41.982005 containerd[1827]: time="2026-04-21T12:02:41.981958846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 12:02:41.984718 containerd[1827]: time="2026-04-21T12:02:41.984682304Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:41.987271 containerd[1827]: time="2026-04-21T12:02:41.987190257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 12:02:41.991178 containerd[1827]: time="2026-04-21T12:02:41.991144541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:41.992219 
containerd[1827]: time="2026-04-21T12:02:41.991907258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 703.868048ms" Apr 21 12:02:41.993353 containerd[1827]: time="2026-04-21T12:02:41.993316488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 700.483776ms" Apr 21 12:02:41.996446 containerd[1827]: time="2026-04-21T12:02:41.996410353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.458649ms" Apr 21 12:02:42.148742 kubelet[2904]: E0421 12:02:42.148680 2904 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-fffe528a55&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 12:02:42.236908 kubelet[2904]: E0421 12:02:42.236767 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-fffe528a55?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s" Apr 21 12:02:42.323978 kubelet[2904]: E0421 
12:02:42.323925 2904 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 12:02:42.423851 kubelet[2904]: I0421 12:02:42.423678 2904 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:42.424147 kubelet[2904]: E0421 12:02:42.424112 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:42.885215 containerd[1827]: time="2026-04-21T12:02:42.884880323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:42.886043 containerd[1827]: time="2026-04-21T12:02:42.885547337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:42.886043 containerd[1827]: time="2026-04-21T12:02:42.885602538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:42.886043 containerd[1827]: time="2026-04-21T12:02:42.885725841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:42.888214 containerd[1827]: time="2026-04-21T12:02:42.887466177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:42.888214 containerd[1827]: time="2026-04-21T12:02:42.888139392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:42.888214 containerd[1827]: time="2026-04-21T12:02:42.888165392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:42.888807 containerd[1827]: time="2026-04-21T12:02:42.888755505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:42.889279 containerd[1827]: time="2026-04-21T12:02:42.889192914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:42.890622 containerd[1827]: time="2026-04-21T12:02:42.889270616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:42.890622 containerd[1827]: time="2026-04-21T12:02:42.889352918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:42.892316 containerd[1827]: time="2026-04-21T12:02:42.890130834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:42.905602 kubelet[2904]: E0421 12:02:42.905563 2904 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 12:02:43.006008 containerd[1827]: time="2026-04-21T12:02:43.005794891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-fffe528a55,Uid:36e3607fa1c910368c99e8d157bd4641,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad031b6d444c3af564f5fb4d91f5f1dd6e1bb67c3f8710eea222a7f8767be868\"" Apr 21 12:02:43.023516 containerd[1827]: time="2026-04-21T12:02:43.022278041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-fffe528a55,Uid:fe5e9cca83a2490e7a441ed9879014a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1ec51ad58c6fc675fe0ba39b68975d220f6c63870bf846e9af22a321361083\"" Apr 21 12:02:43.023516 containerd[1827]: time="2026-04-21T12:02:43.022734050Z" level=info msg="CreateContainer within sandbox \"ad031b6d444c3af564f5fb4d91f5f1dd6e1bb67c3f8710eea222a7f8767be868\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 12:02:43.032460 containerd[1827]: time="2026-04-21T12:02:43.032416656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-fffe528a55,Uid:f8ef841f25589be7612ae881b867ce03,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cbbed23632be6094eb5005eb06ab3002fc18a0961d538cb0273cb3065a0e641\"" Apr 21 12:02:43.036521 containerd[1827]: time="2026-04-21T12:02:43.036469642Z" level=info msg="CreateContainer within sandbox \"8a1ec51ad58c6fc675fe0ba39b68975d220f6c63870bf846e9af22a321361083\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 12:02:43.042408 containerd[1827]: time="2026-04-21T12:02:43.042375867Z" level=info msg="CreateContainer within sandbox \"8cbbed23632be6094eb5005eb06ab3002fc18a0961d538cb0273cb3065a0e641\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 12:02:43.083246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594136303.mount: Deactivated successfully. Apr 21 12:02:43.110793 containerd[1827]: time="2026-04-21T12:02:43.110732619Z" level=info msg="CreateContainer within sandbox \"ad031b6d444c3af564f5fb4d91f5f1dd6e1bb67c3f8710eea222a7f8767be868\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b54f62e8c2f0b2d8140d7f0d685eeda4aff057568a9a12a53b65d153979c5447\"" Apr 21 12:02:43.114117 containerd[1827]: time="2026-04-21T12:02:43.114068690Z" level=info msg="CreateContainer within sandbox \"8cbbed23632be6094eb5005eb06ab3002fc18a0961d538cb0273cb3065a0e641\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"02cdae91a5b631006376637696a195ec4c763e272978f0897e3a7055539c0972\"" Apr 21 12:02:43.114407 containerd[1827]: time="2026-04-21T12:02:43.114373097Z" level=info msg="StartContainer for \"b54f62e8c2f0b2d8140d7f0d685eeda4aff057568a9a12a53b65d153979c5447\"" Apr 21 12:02:43.120526 containerd[1827]: time="2026-04-21T12:02:43.118786090Z" level=info msg="CreateContainer within sandbox \"8a1ec51ad58c6fc675fe0ba39b68975d220f6c63870bf846e9af22a321361083\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0359f925f285839dbc3752a275bd62de1fed7c5e985c8f3a3c806c623b680fe1\"" Apr 21 12:02:43.120526 containerd[1827]: time="2026-04-21T12:02:43.119520006Z" level=info msg="StartContainer for \"02cdae91a5b631006376637696a195ec4c763e272978f0897e3a7055539c0972\"" Apr 21 12:02:43.127090 containerd[1827]: time="2026-04-21T12:02:43.127049366Z" level=info msg="StartContainer for 
\"0359f925f285839dbc3752a275bd62de1fed7c5e985c8f3a3c806c623b680fe1\"" Apr 21 12:02:43.246595 containerd[1827]: time="2026-04-21T12:02:43.246082894Z" level=info msg="StartContainer for \"b54f62e8c2f0b2d8140d7f0d685eeda4aff057568a9a12a53b65d153979c5447\" returns successfully" Apr 21 12:02:43.285529 containerd[1827]: time="2026-04-21T12:02:43.284774516Z" level=info msg="StartContainer for \"02cdae91a5b631006376637696a195ec4c763e272978f0897e3a7055539c0972\" returns successfully" Apr 21 12:02:43.312077 containerd[1827]: time="2026-04-21T12:02:43.312011394Z" level=info msg="StartContainer for \"0359f925f285839dbc3752a275bd62de1fed7c5e985c8f3a3c806c623b680fe1\" returns successfully" Apr 21 12:02:43.903090 kubelet[2904]: E0421 12:02:43.902790 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:43.903665 kubelet[2904]: E0421 12:02:43.902954 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:43.916592 kubelet[2904]: E0421 12:02:43.916551 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:44.028154 kubelet[2904]: I0421 12:02:44.028114 2904 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:44.934525 kubelet[2904]: E0421 12:02:44.934164 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:44.937133 kubelet[2904]: E0421 12:02:44.936872 2904 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:44.991054 kubelet[2904]: E0421 12:02:44.990864 2904 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.7-a-fffe528a55\" not found" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.163351 kubelet[2904]: I0421 12:02:45.163295 2904 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.163351 kubelet[2904]: E0421 12:02:45.163360 2904 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.7-a-fffe528a55\": node \"ci-4081.3.7-a-fffe528a55\" not found" Apr 21 12:02:45.228158 kubelet[2904]: I0421 12:02:45.228006 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.258599 kubelet[2904]: E0421 12:02:45.256923 2904 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-fffe528a55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.258599 kubelet[2904]: I0421 12:02:45.256972 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.262802 kubelet[2904]: E0421 12:02:45.262758 2904 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.262802 kubelet[2904]: I0421 12:02:45.262798 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.267638 kubelet[2904]: E0421 12:02:45.267595 2904 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:45.823081 kubelet[2904]: I0421 12:02:45.823013 2904 apiserver.go:52] "Watching apiserver" Apr 21 12:02:45.830707 kubelet[2904]: I0421 12:02:45.830662 2904 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 12:02:47.177832 kubelet[2904]: I0421 12:02:47.177786 2904 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:47.190619 kubelet[2904]: I0421 12:02:47.189326 2904 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:02:47.216553 systemd[1]: Reloading requested from client PID 3220 ('systemctl') (unit session-9.scope)... Apr 21 12:02:47.216575 systemd[1]: Reloading... Apr 21 12:02:47.320532 zram_generator::config[3260]: No configuration found. Apr 21 12:02:47.459840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 12:02:47.547598 systemd[1]: Reloading finished in 330 ms. Apr 21 12:02:47.583099 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:02:47.599913 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 12:02:47.600251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 12:02:47.611146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:02:47.735687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 12:02:47.748970 (kubelet)[3337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 12:02:47.793707 kubelet[3337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 12:02:47.793707 kubelet[3337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 12:02:47.793707 kubelet[3337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 12:02:47.794274 kubelet[3337]: I0421 12:02:47.793815 3337 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 12:02:47.802013 kubelet[3337]: I0421 12:02:47.801970 3337 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 12:02:47.802013 kubelet[3337]: I0421 12:02:47.802000 3337 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 12:02:47.802275 kubelet[3337]: I0421 12:02:47.802255 3337 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 12:02:47.803478 kubelet[3337]: I0421 12:02:47.803450 3337 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 12:02:47.805879 kubelet[3337]: I0421 12:02:47.805670 3337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 12:02:47.809328 kubelet[3337]: E0421 12:02:47.809294 3337 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 12:02:47.809328 kubelet[3337]: I0421 12:02:47.809323 3337 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 12:02:47.813072 kubelet[3337]: I0421 12:02:47.812979 3337 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 12:02:47.814518 kubelet[3337]: I0421 12:02:47.813785 3337 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 12:02:47.814518 kubelet[3337]: I0421 12:02:47.813816 3337 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-fffe528a55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},
"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 12:02:47.814518 kubelet[3337]: I0421 12:02:47.814138 3337 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 12:02:47.814518 kubelet[3337]: I0421 12:02:47.814151 3337 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 12:02:47.814518 kubelet[3337]: I0421 12:02:47.814212 3337 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:02:47.814854 kubelet[3337]: I0421 12:02:47.814428 3337 kubelet.go:480] "Attempting to sync node with API server" Apr 21 12:02:47.814854 kubelet[3337]: I0421 12:02:47.814443 3337 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 12:02:47.814854 kubelet[3337]: I0421 12:02:47.814473 3337 kubelet.go:386] "Adding apiserver pod source" Apr 21 12:02:47.814989 kubelet[3337]: I0421 12:02:47.814976 3337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 12:02:47.821512 kubelet[3337]: I0421 12:02:47.820249 3337 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 12:02:47.821512 kubelet[3337]: I0421 12:02:47.820904 3337 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 12:02:47.828107 kubelet[3337]: I0421 12:02:47.825785 3337 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 12:02:47.828107 kubelet[3337]: I0421 12:02:47.825826 3337 server.go:1289] "Started kubelet" Apr 21 12:02:47.830528 kubelet[3337]: I0421 12:02:47.830362 3337 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Apr 21 12:02:47.837527 kubelet[3337]: I0421 12:02:47.836296 3337 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 12:02:47.837527 kubelet[3337]: I0421 12:02:47.837344 3337 server.go:317] "Adding debug handlers to kubelet server" Apr 21 12:02:47.844520 kubelet[3337]: I0421 12:02:47.843713 3337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 12:02:47.844520 kubelet[3337]: I0421 12:02:47.843935 3337 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 12:02:47.844520 kubelet[3337]: I0421 12:02:47.844162 3337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 12:02:47.846514 kubelet[3337]: I0421 12:02:47.845506 3337 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 12:02:47.846514 kubelet[3337]: E0421 12:02:47.845754 3337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-fffe528a55\" not found" Apr 21 12:02:47.850434 kubelet[3337]: I0421 12:02:47.850405 3337 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 12:02:47.850722 kubelet[3337]: I0421 12:02:47.850710 3337 reconciler.go:26] "Reconciler: start to sync state" Apr 21 12:02:47.861425 kubelet[3337]: I0421 12:02:47.860633 3337 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 12:02:47.862130 kubelet[3337]: I0421 12:02:47.861932 3337 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 21 12:02:47.862130 kubelet[3337]: I0421 12:02:47.861954 3337 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 12:02:47.862130 kubelet[3337]: I0421 12:02:47.861978 3337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 12:02:47.862130 kubelet[3337]: I0421 12:02:47.861987 3337 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 12:02:47.862130 kubelet[3337]: E0421 12:02:47.862030 3337 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 12:02:47.868611 kubelet[3337]: E0421 12:02:47.868588 3337 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 12:02:47.871616 kubelet[3337]: I0421 12:02:47.871602 3337 factory.go:223] Registration of the containerd container factory successfully Apr 21 12:02:47.872515 kubelet[3337]: I0421 12:02:47.871756 3337 factory.go:223] Registration of the systemd container factory successfully Apr 21 12:02:47.872515 kubelet[3337]: I0421 12:02:47.871869 3337 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 12:02:47.922209 kubelet[3337]: I0421 12:02:47.922176 3337 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 12:02:47.922209 kubelet[3337]: I0421 12:02:47.922198 3337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 12:02:47.922209 kubelet[3337]: I0421 12:02:47.922223 3337 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:02:47.922537 kubelet[3337]: I0421 12:02:47.922382 3337 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 12:02:47.922537 kubelet[3337]: I0421 12:02:47.922396 3337 
state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 12:02:47.922537 kubelet[3337]: I0421 12:02:47.922415 3337 policy_none.go:49] "None policy: Start" Apr 21 12:02:47.922537 kubelet[3337]: I0421 12:02:47.922429 3337 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 12:02:47.922537 kubelet[3337]: I0421 12:02:47.922441 3337 state_mem.go:35] "Initializing new in-memory state store" Apr 21 12:02:47.922728 kubelet[3337]: I0421 12:02:47.922592 3337 state_mem.go:75] "Updated machine memory state" Apr 21 12:02:47.924734 kubelet[3337]: E0421 12:02:47.923685 3337 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 12:02:47.924734 kubelet[3337]: I0421 12:02:47.923894 3337 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 12:02:47.924734 kubelet[3337]: I0421 12:02:47.923909 3337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 12:02:47.926297 kubelet[3337]: I0421 12:02:47.925666 3337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 12:02:47.926996 kubelet[3337]: E0421 12:02:47.926973 3337 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 12:02:47.964046 kubelet[3337]: I0421 12:02:47.963986 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:47.964252 kubelet[3337]: I0421 12:02:47.964228 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:47.964397 kubelet[3337]: I0421 12:02:47.963986 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:47.972615 kubelet[3337]: I0421 12:02:47.972571 3337 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:02:47.977819 kubelet[3337]: I0421 12:02:47.977793 3337 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:02:47.978422 kubelet[3337]: I0421 12:02:47.978397 3337 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:02:47.978536 kubelet[3337]: E0421 12:02:47.978454 3337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.028110 kubelet[3337]: I0421 12:02:48.027000 3337 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.039547 kubelet[3337]: I0421 12:02:48.039506 3337 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.039681 kubelet[3337]: I0421 12:02:48.039593 3337 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.051923 kubelet[3337]: I0421 12:02:48.051885 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.051923 kubelet[3337]: I0421 12:02:48.051928 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.051923 kubelet[3337]: I0421 12:02:48.051956 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36e3607fa1c910368c99e8d157bd4641-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-fffe528a55\" (UID: \"36e3607fa1c910368c99e8d157bd4641\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.052234 kubelet[3337]: I0421 12:02:48.051992 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8ef841f25589be7612ae881b867ce03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" (UID: \"f8ef841f25589be7612ae881b867ce03\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.052234 kubelet[3337]: I0421 12:02:48.052029 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.052234 kubelet[3337]: I0421 12:02:48.052053 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8ef841f25589be7612ae881b867ce03-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" (UID: \"f8ef841f25589be7612ae881b867ce03\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.052234 kubelet[3337]: I0421 12:02:48.052079 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8ef841f25589be7612ae881b867ce03-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" (UID: \"f8ef841f25589be7612ae881b867ce03\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.052234 kubelet[3337]: I0421 12:02:48.052100 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.052359 kubelet[3337]: I0421 12:02:48.052120 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe5e9cca83a2490e7a441ed9879014a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" (UID: \"fe5e9cca83a2490e7a441ed9879014a3\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.819747 kubelet[3337]: I0421 12:02:48.819383 3337 
apiserver.go:52] "Watching apiserver" Apr 21 12:02:48.851735 kubelet[3337]: I0421 12:02:48.850808 3337 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 12:02:48.877528 kubelet[3337]: I0421 12:02:48.877435 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" podStartSLOduration=1.87739178 podStartE2EDuration="1.87739178s" podCreationTimestamp="2026-04-21 12:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:02:48.863878975 +0000 UTC m=+1.110792625" watchObservedRunningTime="2026-04-21 12:02:48.87739178 +0000 UTC m=+1.124305430" Apr 21 12:02:48.891678 kubelet[3337]: I0421 12:02:48.891596 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" podStartSLOduration=1.8915734020000001 podStartE2EDuration="1.891573402s" podCreationTimestamp="2026-04-21 12:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:02:48.878767412 +0000 UTC m=+1.125681162" watchObservedRunningTime="2026-04-21 12:02:48.891573402 +0000 UTC m=+1.138487052" Apr 21 12:02:48.891893 kubelet[3337]: I0421 12:02:48.891705 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.7-a-fffe528a55" podStartSLOduration=1.891696104 podStartE2EDuration="1.891696104s" podCreationTimestamp="2026-04-21 12:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:02:48.891398498 +0000 UTC m=+1.138312248" watchObservedRunningTime="2026-04-21 12:02:48.891696104 +0000 UTC m=+1.138609754" Apr 21 12:02:48.903897 kubelet[3337]: I0421 12:02:48.903051 3337 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.903897 kubelet[3337]: I0421 12:02:48.903366 3337 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.918990 kubelet[3337]: I0421 12:02:48.918946 3337 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:02:48.919185 kubelet[3337]: E0421 12:02:48.919016 3337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-fffe528a55\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:48.920652 kubelet[3337]: I0421 12:02:48.920361 3337 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:02:48.920652 kubelet[3337]: E0421 12:02:48.920409 3337 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.7-a-fffe528a55\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-fffe528a55" Apr 21 12:02:53.337347 kubelet[3337]: I0421 12:02:53.337304 3337 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 12:02:53.338308 containerd[1827]: time="2026-04-21T12:02:53.338200939Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 21 12:02:53.338763 kubelet[3337]: I0421 12:02:53.338568 3337 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 12:02:54.096911 kubelet[3337]: I0421 12:02:54.096864 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/031e499f-3546-47a3-ad9a-6e0cbf663667-kube-proxy\") pod \"kube-proxy-5qq6q\" (UID: \"031e499f-3546-47a3-ad9a-6e0cbf663667\") " pod="kube-system/kube-proxy-5qq6q" Apr 21 12:02:54.097196 kubelet[3337]: I0421 12:02:54.096922 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/031e499f-3546-47a3-ad9a-6e0cbf663667-xtables-lock\") pod \"kube-proxy-5qq6q\" (UID: \"031e499f-3546-47a3-ad9a-6e0cbf663667\") " pod="kube-system/kube-proxy-5qq6q" Apr 21 12:02:54.097196 kubelet[3337]: I0421 12:02:54.096952 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/031e499f-3546-47a3-ad9a-6e0cbf663667-lib-modules\") pod \"kube-proxy-5qq6q\" (UID: \"031e499f-3546-47a3-ad9a-6e0cbf663667\") " pod="kube-system/kube-proxy-5qq6q" Apr 21 12:02:54.097196 kubelet[3337]: I0421 12:02:54.096974 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqr5t\" (UniqueName: \"kubernetes.io/projected/031e499f-3546-47a3-ad9a-6e0cbf663667-kube-api-access-zqr5t\") pod \"kube-proxy-5qq6q\" (UID: \"031e499f-3546-47a3-ad9a-6e0cbf663667\") " pod="kube-system/kube-proxy-5qq6q" Apr 21 12:02:54.379068 containerd[1827]: time="2026-04-21T12:02:54.378939745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qq6q,Uid:031e499f-3546-47a3-ad9a-6e0cbf663667,Namespace:kube-system,Attempt:0,}" Apr 21 12:02:54.434519 containerd[1827]: time="2026-04-21T12:02:54.434380882Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:54.438140 containerd[1827]: time="2026-04-21T12:02:54.435643713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:54.438140 containerd[1827]: time="2026-04-21T12:02:54.435835118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:54.438775 containerd[1827]: time="2026-04-21T12:02:54.437620361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:54.477514 systemd[1]: run-containerd-runc-k8s.io-77b1f098802df46934dc43e29044cc806dccf05e0a06f010b254c3b1503e381b-runc.lD1rzx.mount: Deactivated successfully. Apr 21 12:02:54.503280 containerd[1827]: time="2026-04-21T12:02:54.503234343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qq6q,Uid:031e499f-3546-47a3-ad9a-6e0cbf663667,Namespace:kube-system,Attempt:0,} returns sandbox id \"77b1f098802df46934dc43e29044cc806dccf05e0a06f010b254c3b1503e381b\"" Apr 21 12:02:54.515232 containerd[1827]: time="2026-04-21T12:02:54.515187632Z" level=info msg="CreateContainer within sandbox \"77b1f098802df46934dc43e29044cc806dccf05e0a06f010b254c3b1503e381b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 12:02:54.623274 containerd[1827]: time="2026-04-21T12:02:54.623220438Z" level=info msg="CreateContainer within sandbox \"77b1f098802df46934dc43e29044cc806dccf05e0a06f010b254c3b1503e381b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a44099e89ce2eebdf55e25c4ef2cb0735784e1b8b07a7a5b232873217283bb2\"" Apr 21 12:02:54.624093 containerd[1827]: time="2026-04-21T12:02:54.624044458Z" level=info msg="StartContainer for 
\"2a44099e89ce2eebdf55e25c4ef2cb0735784e1b8b07a7a5b232873217283bb2\"" Apr 21 12:02:54.688279 containerd[1827]: time="2026-04-21T12:02:54.686203857Z" level=info msg="StartContainer for \"2a44099e89ce2eebdf55e25c4ef2cb0735784e1b8b07a7a5b232873217283bb2\" returns successfully" Apr 21 12:02:54.700786 kubelet[3337]: I0421 12:02:54.700706 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1c65f18-54fe-4fe9-9c59-bd56940ee4e6-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-c27kx\" (UID: \"c1c65f18-54fe-4fe9-9c59-bd56940ee4e6\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c27kx" Apr 21 12:02:54.700786 kubelet[3337]: I0421 12:02:54.700771 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7859d\" (UniqueName: \"kubernetes.io/projected/c1c65f18-54fe-4fe9-9c59-bd56940ee4e6-kube-api-access-7859d\") pod \"tigera-operator-6bf85f8dd-c27kx\" (UID: \"c1c65f18-54fe-4fe9-9c59-bd56940ee4e6\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c27kx" Apr 21 12:02:54.911249 containerd[1827]: time="2026-04-21T12:02:54.911197185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c27kx,Uid:c1c65f18-54fe-4fe9-9c59-bd56940ee4e6,Namespace:tigera-operator,Attempt:0,}" Apr 21 12:02:54.969794 containerd[1827]: time="2026-04-21T12:02:54.969210684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:54.969794 containerd[1827]: time="2026-04-21T12:02:54.969475191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:54.969794 containerd[1827]: time="2026-04-21T12:02:54.969548393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:54.971198 containerd[1827]: time="2026-04-21T12:02:54.970722821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:54.976679 kubelet[3337]: I0421 12:02:54.976411 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5qq6q" podStartSLOduration=0.976366057 podStartE2EDuration="976.366057ms" podCreationTimestamp="2026-04-21 12:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:02:54.955856762 +0000 UTC m=+7.202770412" watchObservedRunningTime="2026-04-21 12:02:54.976366057 +0000 UTC m=+7.223279907" Apr 21 12:02:55.048766 containerd[1827]: time="2026-04-21T12:02:55.048672001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c27kx,Uid:c1c65f18-54fe-4fe9-9c59-bd56940ee4e6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"37a5ff3fff26df39d11c1b14b592419fcbb6beba317008991307eb65c8b356b1\"" Apr 21 12:02:55.051405 containerd[1827]: time="2026-04-21T12:02:55.051170862Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 12:02:56.558115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597921200.mount: Deactivated successfully. 
Apr 21 12:03:00.372404 containerd[1827]: time="2026-04-21T12:03:00.372337627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:00.375384 containerd[1827]: time="2026-04-21T12:03:00.375259601Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 12:03:00.380207 containerd[1827]: time="2026-04-21T12:03:00.380143524Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:00.389529 containerd[1827]: time="2026-04-21T12:03:00.388105426Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:00.389529 containerd[1827]: time="2026-04-21T12:03:00.389125851Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.337913989s" Apr 21 12:03:00.389529 containerd[1827]: time="2026-04-21T12:03:00.389169452Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 12:03:00.402454 containerd[1827]: time="2026-04-21T12:03:00.402414587Z" level=info msg="CreateContainer within sandbox \"37a5ff3fff26df39d11c1b14b592419fcbb6beba317008991307eb65c8b356b1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 12:03:00.448014 containerd[1827]: time="2026-04-21T12:03:00.447955239Z" level=info msg="CreateContainer within sandbox 
\"37a5ff3fff26df39d11c1b14b592419fcbb6beba317008991307eb65c8b356b1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1f4138e94c04ae654c26c97361b5f052449ed766d02b325ec99ad965ae55569c\"" Apr 21 12:03:00.448883 containerd[1827]: time="2026-04-21T12:03:00.448723159Z" level=info msg="StartContainer for \"1f4138e94c04ae654c26c97361b5f052449ed766d02b325ec99ad965ae55569c\"" Apr 21 12:03:00.507722 containerd[1827]: time="2026-04-21T12:03:00.507667850Z" level=info msg="StartContainer for \"1f4138e94c04ae654c26c97361b5f052449ed766d02b325ec99ad965ae55569c\" returns successfully" Apr 21 12:03:00.949767 kubelet[3337]: I0421 12:03:00.949696 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-c27kx" podStartSLOduration=1.608352155 podStartE2EDuration="6.949675629s" podCreationTimestamp="2026-04-21 12:02:54 +0000 UTC" firstStartedPulling="2026-04-21 12:02:55.050715951 +0000 UTC m=+7.297629601" lastFinishedPulling="2026-04-21 12:03:00.392039425 +0000 UTC m=+12.638953075" observedRunningTime="2026-04-21 12:03:00.949214518 +0000 UTC m=+13.196128168" watchObservedRunningTime="2026-04-21 12:03:00.949675629 +0000 UTC m=+13.196589379" Apr 21 12:03:06.975166 sudo[2367]: pam_unix(sudo:session): session closed for user root Apr 21 12:03:06.993035 sshd[2363]: pam_unix(sshd:session): session closed for user core Apr 21 12:03:06.999442 systemd[1]: sshd@6-10.0.0.5:22-20.229.252.112:55078.service: Deactivated successfully. Apr 21 12:03:07.005068 systemd-logind[1793]: Session 9 logged out. Waiting for processes to exit. Apr 21 12:03:07.005836 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 12:03:07.008659 systemd-logind[1793]: Removed session 9. 
Apr 21 12:03:11.721653 kubelet[3337]: I0421 12:03:11.721593 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8447438-2543-4b39-a724-95b7085e7fc9-tigera-ca-bundle\") pod \"calico-typha-656cbb6c5-bvdvs\" (UID: \"f8447438-2543-4b39-a724-95b7085e7fc9\") " pod="calico-system/calico-typha-656cbb6c5-bvdvs" Apr 21 12:03:11.721653 kubelet[3337]: I0421 12:03:11.721657 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f8447438-2543-4b39-a724-95b7085e7fc9-typha-certs\") pod \"calico-typha-656cbb6c5-bvdvs\" (UID: \"f8447438-2543-4b39-a724-95b7085e7fc9\") " pod="calico-system/calico-typha-656cbb6c5-bvdvs" Apr 21 12:03:11.722197 kubelet[3337]: I0421 12:03:11.721682 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8xd\" (UniqueName: \"kubernetes.io/projected/f8447438-2543-4b39-a724-95b7085e7fc9-kube-api-access-qd8xd\") pod \"calico-typha-656cbb6c5-bvdvs\" (UID: \"f8447438-2543-4b39-a724-95b7085e7fc9\") " pod="calico-system/calico-typha-656cbb6c5-bvdvs" Apr 21 12:03:11.939873 containerd[1827]: time="2026-04-21T12:03:11.939811580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656cbb6c5-bvdvs,Uid:f8447438-2543-4b39-a724-95b7085e7fc9,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:11.989422 containerd[1827]: time="2026-04-21T12:03:11.989032433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:11.989422 containerd[1827]: time="2026-04-21T12:03:11.989097035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:11.989422 containerd[1827]: time="2026-04-21T12:03:11.989116735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:11.990350 containerd[1827]: time="2026-04-21T12:03:11.990281762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:12.080663 containerd[1827]: time="2026-04-21T12:03:12.079973663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656cbb6c5-bvdvs,Uid:f8447438-2543-4b39-a724-95b7085e7fc9,Namespace:calico-system,Attempt:0,} returns sandbox id \"743e890955b29c9e7a0bb155d18ea502e2642e84e9ed4cc664c0923088d1a5f2\"" Apr 21 12:03:12.083826 containerd[1827]: time="2026-04-21T12:03:12.083451045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 12:03:12.127340 kubelet[3337]: I0421 12:03:12.127239 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99dc552c-bf40-4024-9f89-894c827e11d3-tigera-ca-bundle\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127340 kubelet[3337]: I0421 12:03:12.127312 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-bpffs\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127340 kubelet[3337]: I0421 12:03:12.127338 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-cni-log-dir\") pod 
\"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127340 kubelet[3337]: I0421 12:03:12.127357 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/99dc552c-bf40-4024-9f89-894c827e11d3-node-certs\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127851 kubelet[3337]: I0421 12:03:12.127377 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-policysync\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127851 kubelet[3337]: I0421 12:03:12.127397 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-sys-fs\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127851 kubelet[3337]: I0421 12:03:12.127431 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-var-lib-calico\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127851 kubelet[3337]: I0421 12:03:12.127463 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-flexvol-driver-host\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " 
pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.127851 kubelet[3337]: I0421 12:03:12.127479 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-xtables-lock\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.128003 kubelet[3337]: I0421 12:03:12.127514 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-lib-modules\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.128003 kubelet[3337]: I0421 12:03:12.127541 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-var-run-calico\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.128003 kubelet[3337]: I0421 12:03:12.127569 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-nodeproc\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.128003 kubelet[3337]: I0421 12:03:12.127590 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v66pb\" (UniqueName: \"kubernetes.io/projected/99dc552c-bf40-4024-9f89-894c827e11d3-kube-api-access-v66pb\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.128003 
kubelet[3337]: I0421 12:03:12.127613 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-cni-bin-dir\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.128121 kubelet[3337]: I0421 12:03:12.127632 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/99dc552c-bf40-4024-9f89-894c827e11d3-cni-net-dir\") pod \"calico-node-cr4cj\" (UID: \"99dc552c-bf40-4024-9f89-894c827e11d3\") " pod="calico-system/calico-node-cr4cj" Apr 21 12:03:12.237614 kubelet[3337]: E0421 12:03:12.237484 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:12.239183 kubelet[3337]: E0421 12:03:12.239090 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.239183 kubelet[3337]: W0421 12:03:12.239113 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.239183 kubelet[3337]: E0421 12:03:12.239135 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.241803 kubelet[3337]: E0421 12:03:12.240525 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.241803 kubelet[3337]: W0421 12:03:12.240544 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.241803 kubelet[3337]: E0421 12:03:12.241744 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.300701 kubelet[3337]: E0421 12:03:12.300663 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.300701 kubelet[3337]: W0421 12:03:12.300692 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.300944 kubelet[3337]: E0421 12:03:12.300722 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.301038 kubelet[3337]: E0421 12:03:12.301023 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.301038 kubelet[3337]: W0421 12:03:12.301035 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.301141 kubelet[3337]: E0421 12:03:12.301051 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.301322 kubelet[3337]: E0421 12:03:12.301300 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.301322 kubelet[3337]: W0421 12:03:12.301323 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.301460 kubelet[3337]: E0421 12:03:12.301339 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.301691 kubelet[3337]: E0421 12:03:12.301672 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.301691 kubelet[3337]: W0421 12:03:12.301687 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.301862 kubelet[3337]: E0421 12:03:12.301703 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.302000 kubelet[3337]: E0421 12:03:12.301979 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.302000 kubelet[3337]: W0421 12:03:12.301995 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.302158 kubelet[3337]: E0421 12:03:12.302008 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.302232 kubelet[3337]: E0421 12:03:12.302216 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.302232 kubelet[3337]: W0421 12:03:12.302227 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.302383 kubelet[3337]: E0421 12:03:12.302240 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.302452 kubelet[3337]: E0421 12:03:12.302442 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.302522 kubelet[3337]: W0421 12:03:12.302451 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.302522 kubelet[3337]: E0421 12:03:12.302464 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.302836 kubelet[3337]: E0421 12:03:12.302698 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.302836 kubelet[3337]: W0421 12:03:12.302709 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.302836 kubelet[3337]: E0421 12:03:12.302721 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.303022 kubelet[3337]: E0421 12:03:12.302937 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.303022 kubelet[3337]: W0421 12:03:12.302949 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.303022 kubelet[3337]: E0421 12:03:12.302961 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.303230 kubelet[3337]: E0421 12:03:12.303147 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.303230 kubelet[3337]: W0421 12:03:12.303157 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.303230 kubelet[3337]: E0421 12:03:12.303168 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.303430 kubelet[3337]: E0421 12:03:12.303348 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.303430 kubelet[3337]: W0421 12:03:12.303358 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.303430 kubelet[3337]: E0421 12:03:12.303370 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.303644 kubelet[3337]: E0421 12:03:12.303577 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.303644 kubelet[3337]: W0421 12:03:12.303587 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.303644 kubelet[3337]: E0421 12:03:12.303599 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.303826 kubelet[3337]: E0421 12:03:12.303799 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.303826 kubelet[3337]: W0421 12:03:12.303811 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.303826 kubelet[3337]: E0421 12:03:12.303823 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.304028 kubelet[3337]: E0421 12:03:12.304008 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.304028 kubelet[3337]: W0421 12:03:12.304026 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.304165 kubelet[3337]: E0421 12:03:12.304039 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.304237 kubelet[3337]: E0421 12:03:12.304230 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.304291 kubelet[3337]: W0421 12:03:12.304240 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.304291 kubelet[3337]: E0421 12:03:12.304253 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.304452 kubelet[3337]: E0421 12:03:12.304437 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.304452 kubelet[3337]: W0421 12:03:12.304450 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.304609 kubelet[3337]: E0421 12:03:12.304462 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.304711 kubelet[3337]: E0421 12:03:12.304693 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.304711 kubelet[3337]: W0421 12:03:12.304708 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.304838 kubelet[3337]: E0421 12:03:12.304721 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.304924 kubelet[3337]: E0421 12:03:12.304909 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.304987 kubelet[3337]: W0421 12:03:12.304925 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.304987 kubelet[3337]: E0421 12:03:12.304938 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.305142 kubelet[3337]: E0421 12:03:12.305127 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.305142 kubelet[3337]: W0421 12:03:12.305140 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.305274 kubelet[3337]: E0421 12:03:12.305152 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.305350 kubelet[3337]: E0421 12:03:12.305341 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.305393 kubelet[3337]: W0421 12:03:12.305351 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.305393 kubelet[3337]: E0421 12:03:12.305364 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.333291 kubelet[3337]: E0421 12:03:12.333253 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.333291 kubelet[3337]: W0421 12:03:12.333280 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.333291 kubelet[3337]: E0421 12:03:12.333303 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.333631 kubelet[3337]: I0421 12:03:12.333343 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/63866721-d13a-44fd-83f4-80f75344c988-varrun\") pod \"csi-node-driver-4vbvh\" (UID: \"63866721-d13a-44fd-83f4-80f75344c988\") " pod="calico-system/csi-node-driver-4vbvh" Apr 21 12:03:12.333703 kubelet[3337]: E0421 12:03:12.333677 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.333703 kubelet[3337]: W0421 12:03:12.333698 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.333826 kubelet[3337]: E0421 12:03:12.333716 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.333826 kubelet[3337]: I0421 12:03:12.333752 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63866721-d13a-44fd-83f4-80f75344c988-kubelet-dir\") pod \"csi-node-driver-4vbvh\" (UID: \"63866721-d13a-44fd-83f4-80f75344c988\") " pod="calico-system/csi-node-driver-4vbvh" Apr 21 12:03:12.334001 kubelet[3337]: E0421 12:03:12.333985 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.334001 kubelet[3337]: W0421 12:03:12.334000 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.334110 kubelet[3337]: E0421 12:03:12.334014 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.334110 kubelet[3337]: I0421 12:03:12.334051 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwshx\" (UniqueName: \"kubernetes.io/projected/63866721-d13a-44fd-83f4-80f75344c988-kube-api-access-mwshx\") pod \"csi-node-driver-4vbvh\" (UID: \"63866721-d13a-44fd-83f4-80f75344c988\") " pod="calico-system/csi-node-driver-4vbvh" Apr 21 12:03:12.334331 kubelet[3337]: E0421 12:03:12.334310 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.334331 kubelet[3337]: W0421 12:03:12.334325 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.334486 kubelet[3337]: E0421 12:03:12.334340 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.334486 kubelet[3337]: I0421 12:03:12.334367 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63866721-d13a-44fd-83f4-80f75344c988-registration-dir\") pod \"csi-node-driver-4vbvh\" (UID: \"63866721-d13a-44fd-83f4-80f75344c988\") " pod="calico-system/csi-node-driver-4vbvh" Apr 21 12:03:12.334669 kubelet[3337]: E0421 12:03:12.334608 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.334669 kubelet[3337]: W0421 12:03:12.334621 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.334669 kubelet[3337]: E0421 12:03:12.334634 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.334669 kubelet[3337]: I0421 12:03:12.334663 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63866721-d13a-44fd-83f4-80f75344c988-socket-dir\") pod \"csi-node-driver-4vbvh\" (UID: \"63866721-d13a-44fd-83f4-80f75344c988\") " pod="calico-system/csi-node-driver-4vbvh" Apr 21 12:03:12.334939 kubelet[3337]: E0421 12:03:12.334922 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.334939 kubelet[3337]: W0421 12:03:12.334937 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.335076 kubelet[3337]: E0421 12:03:12.334951 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.335165 kubelet[3337]: E0421 12:03:12.335151 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.335165 kubelet[3337]: W0421 12:03:12.335164 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.335296 kubelet[3337]: E0421 12:03:12.335180 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.335391 kubelet[3337]: E0421 12:03:12.335375 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.335391 kubelet[3337]: W0421 12:03:12.335389 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.335533 kubelet[3337]: E0421 12:03:12.335401 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.335628 kubelet[3337]: E0421 12:03:12.335611 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.335628 kubelet[3337]: W0421 12:03:12.335625 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.335782 kubelet[3337]: E0421 12:03:12.335638 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.335867 kubelet[3337]: E0421 12:03:12.335836 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.335867 kubelet[3337]: W0421 12:03:12.335847 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.335995 kubelet[3337]: E0421 12:03:12.335859 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.336151 kubelet[3337]: E0421 12:03:12.336132 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.336151 kubelet[3337]: W0421 12:03:12.336149 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.336257 kubelet[3337]: E0421 12:03:12.336163 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.336421 kubelet[3337]: E0421 12:03:12.336404 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.336421 kubelet[3337]: W0421 12:03:12.336418 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.336580 kubelet[3337]: E0421 12:03:12.336433 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.336683 kubelet[3337]: E0421 12:03:12.336666 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.336683 kubelet[3337]: W0421 12:03:12.336681 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.336796 kubelet[3337]: E0421 12:03:12.336694 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.336920 kubelet[3337]: E0421 12:03:12.336904 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.336920 kubelet[3337]: W0421 12:03:12.336919 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.337075 kubelet[3337]: E0421 12:03:12.336932 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.337152 kubelet[3337]: E0421 12:03:12.337142 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.337204 kubelet[3337]: W0421 12:03:12.337155 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.337204 kubelet[3337]: E0421 12:03:12.337169 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.412032 kubelet[3337]: E0421 12:03:12.412002 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.412032 kubelet[3337]: W0421 12:03:12.412065 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.412032 kubelet[3337]: E0421 12:03:12.412091 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.435795 kubelet[3337]: E0421 12:03:12.435751 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.435795 kubelet[3337]: W0421 12:03:12.435778 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.436124 kubelet[3337]: E0421 12:03:12.435811 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.436248 kubelet[3337]: E0421 12:03:12.436216 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.436342 kubelet[3337]: W0421 12:03:12.436272 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.436342 kubelet[3337]: E0421 12:03:12.436295 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.436670 kubelet[3337]: E0421 12:03:12.436651 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.436670 kubelet[3337]: W0421 12:03:12.436666 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.436884 kubelet[3337]: E0421 12:03:12.436682 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.437031 kubelet[3337]: E0421 12:03:12.437011 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.437031 kubelet[3337]: W0421 12:03:12.437030 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.437238 kubelet[3337]: E0421 12:03:12.437044 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.437418 kubelet[3337]: E0421 12:03:12.437306 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.437418 kubelet[3337]: W0421 12:03:12.437318 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.437418 kubelet[3337]: E0421 12:03:12.437331 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.437637 kubelet[3337]: E0421 12:03:12.437609 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.437637 kubelet[3337]: W0421 12:03:12.437620 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.437637 kubelet[3337]: E0421 12:03:12.437633 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.437857 kubelet[3337]: E0421 12:03:12.437839 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.437911 kubelet[3337]: W0421 12:03:12.437857 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.437911 kubelet[3337]: E0421 12:03:12.437870 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.438092 kubelet[3337]: E0421 12:03:12.438074 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.438092 kubelet[3337]: W0421 12:03:12.438088 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.438215 kubelet[3337]: E0421 12:03:12.438101 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.438361 kubelet[3337]: E0421 12:03:12.438343 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.438361 kubelet[3337]: W0421 12:03:12.438359 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.438480 kubelet[3337]: E0421 12:03:12.438388 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.438767 kubelet[3337]: E0421 12:03:12.438749 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.438767 kubelet[3337]: W0421 12:03:12.438764 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.438922 kubelet[3337]: E0421 12:03:12.438777 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.439009 kubelet[3337]: E0421 12:03:12.438991 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.439060 kubelet[3337]: W0421 12:03:12.439007 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.439060 kubelet[3337]: E0421 12:03:12.439020 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.439242 kubelet[3337]: E0421 12:03:12.439227 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.439242 kubelet[3337]: W0421 12:03:12.439241 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.439374 kubelet[3337]: E0421 12:03:12.439254 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.439467 kubelet[3337]: E0421 12:03:12.439452 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.439467 kubelet[3337]: W0421 12:03:12.439466 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.439663 kubelet[3337]: E0421 12:03:12.439479 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.439762 kubelet[3337]: E0421 12:03:12.439744 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.439762 kubelet[3337]: W0421 12:03:12.439759 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.439903 kubelet[3337]: E0421 12:03:12.439773 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.439988 kubelet[3337]: E0421 12:03:12.439971 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.439988 kubelet[3337]: W0421 12:03:12.439985 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.440083 kubelet[3337]: E0421 12:03:12.439998 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.440375 kubelet[3337]: E0421 12:03:12.440356 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.440375 kubelet[3337]: W0421 12:03:12.440370 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.440556 kubelet[3337]: E0421 12:03:12.440384 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.440642 kubelet[3337]: E0421 12:03:12.440625 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.440642 kubelet[3337]: W0421 12:03:12.440640 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.440842 kubelet[3337]: E0421 12:03:12.440654 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.440909 kubelet[3337]: E0421 12:03:12.440869 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.440909 kubelet[3337]: W0421 12:03:12.440880 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.440909 kubelet[3337]: E0421 12:03:12.440893 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.441124 kubelet[3337]: E0421 12:03:12.441109 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.441124 kubelet[3337]: W0421 12:03:12.441122 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.441256 kubelet[3337]: E0421 12:03:12.441134 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.441349 kubelet[3337]: E0421 12:03:12.441331 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.441349 kubelet[3337]: W0421 12:03:12.441347 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.441566 kubelet[3337]: E0421 12:03:12.441369 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.441627 kubelet[3337]: E0421 12:03:12.441591 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.441627 kubelet[3337]: W0421 12:03:12.441601 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.441627 kubelet[3337]: E0421 12:03:12.441617 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:12.441868 kubelet[3337]: E0421 12:03:12.441853 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.441868 kubelet[3337]: W0421 12:03:12.441866 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.441981 kubelet[3337]: E0421 12:03:12.441881 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:12.442135 kubelet[3337]: E0421 12:03:12.442118 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:12.442135 kubelet[3337]: W0421 12:03:12.442133 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:12.442256 kubelet[3337]: E0421 12:03:12.442147 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 12:03:12.442415 kubelet[3337]: E0421 12:03:12.442398 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:12.442415 kubelet[3337]: W0421 12:03:12.442413 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:12.442533 kubelet[3337]: E0421 12:03:12.442428 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 12:03:12.442937 kubelet[3337]: E0421 12:03:12.442882 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:12.442937 kubelet[3337]: W0421 12:03:12.442897 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:12.442937 kubelet[3337]: E0421 12:03:12.442911 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 12:03:12.470240 kubelet[3337]: E0421 12:03:12.470047 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:12.470240 kubelet[3337]: W0421 12:03:12.470077 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:12.470240 kubelet[3337]: E0421 12:03:12.470104 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 12:03:12.681169 containerd[1827]: time="2026-04-21T12:03:12.681121145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cr4cj,Uid:99dc552c-bf40-4024-9f89-894c827e11d3,Namespace:calico-system,Attempt:0,}"
Apr 21 12:03:12.736400 containerd[1827]: time="2026-04-21T12:03:12.735584221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 12:03:12.736400 containerd[1827]: time="2026-04-21T12:03:12.735670423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 12:03:12.736400 containerd[1827]: time="2026-04-21T12:03:12.735691723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 12:03:12.736400 containerd[1827]: time="2026-04-21T12:03:12.735818026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 12:03:12.778316 containerd[1827]: time="2026-04-21T12:03:12.778254620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cr4cj,Uid:99dc552c-bf40-4024-9f89-894c827e11d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\""
Apr 21 12:03:13.654702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566774527.mount: Deactivated successfully.
Apr 21 12:03:13.864000 kubelet[3337]: E0421 12:03:13.863932 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988"
Apr 21 12:03:14.768243 containerd[1827]: time="2026-04-21T12:03:14.767239912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:03:14.771341 containerd[1827]: time="2026-04-21T12:03:14.771219005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 21 12:03:14.777315 containerd[1827]: time="2026-04-21T12:03:14.777244746Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:03:14.783661 containerd[1827]: time="2026-04-21T12:03:14.783586695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:03:14.784479 containerd[1827]: time="2026-04-21T12:03:14.784309912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.700811565s"
Apr 21 12:03:14.784479 containerd[1827]: time="2026-04-21T12:03:14.784352213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 21 12:03:14.785942 containerd[1827]: time="2026-04-21T12:03:14.785697244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 21 12:03:14.811031 containerd[1827]: time="2026-04-21T12:03:14.810979536Z" level=info msg="CreateContainer within sandbox \"743e890955b29c9e7a0bb155d18ea502e2642e84e9ed4cc664c0923088d1a5f2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 21 12:03:14.861895 containerd[1827]: time="2026-04-21T12:03:14.861835928Z" level=info msg="CreateContainer within sandbox \"743e890955b29c9e7a0bb155d18ea502e2642e84e9ed4cc664c0923088d1a5f2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b82e607167a6877e272fec40a5b943f0ae72f3f7e3246ef663b9f175f4453ebf\""
Apr 21 12:03:14.863503 containerd[1827]: time="2026-04-21T12:03:14.862650447Z" level=info msg="StartContainer for \"b82e607167a6877e272fec40a5b943f0ae72f3f7e3246ef663b9f175f4453ebf\""
Apr 21 12:03:14.952476 containerd[1827]: time="2026-04-21T12:03:14.952388049Z" level=info msg="StartContainer for \"b82e607167a6877e272fec40a5b943f0ae72f3f7e3246ef663b9f175f4453ebf\" returns successfully"
Apr 21 12:03:15.026200 kubelet[3337]: E0421 12:03:15.026032 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:15.026200 kubelet[3337]: W0421 12:03:15.026069 3337 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:15.026200 kubelet[3337]: E0421 12:03:15.026100 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 12:03:15.059726 kubelet[3337]: E0421 12:03:15.059707 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:15.059777 kubelet[3337]: W0421 12:03:15.059725 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:15.059777 kubelet[3337]: E0421 12:03:15.059743 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 12:03:15.863946 kubelet[3337]: E0421 12:03:15.862917 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988"
Apr 21 12:03:15.974346 kubelet[3337]: I0421 12:03:15.973756 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 12:03:16.038039 kubelet[3337]: E0421 12:03:16.037995 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:16.038039 kubelet[3337]: W0421 12:03:16.038029 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:16.038954 kubelet[3337]: E0421 12:03:16.038059 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 12:03:16.038954 kubelet[3337]: E0421 12:03:16.038335 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:16.038954 kubelet[3337]: W0421 12:03:16.038351 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:16.038954 kubelet[3337]: E0421 12:03:16.038371 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:16.038954 kubelet[3337]: E0421 12:03:16.038628 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:16.038954 kubelet[3337]: W0421 12:03:16.038642 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:16.038954 kubelet[3337]: E0421 12:03:16.038659 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:16.038954 kubelet[3337]: E0421 12:03:16.038884 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:16.038954 kubelet[3337]: W0421 12:03:16.038899 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:16.038954 kubelet[3337]: E0421 12:03:16.038913 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:16.039541 kubelet[3337]: E0421 12:03:16.039148 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:16.039541 kubelet[3337]: W0421 12:03:16.039160 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:16.039541 kubelet[3337]: E0421 12:03:16.039173 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:16.039541 kubelet[3337]: E0421 12:03:16.039385 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:16.039541 kubelet[3337]: W0421 12:03:16.039396 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:16.039541 kubelet[3337]: E0421 12:03:16.039408 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:16.066733 kubelet[3337]: E0421 12:03:16.066716 3337 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:16.066733 kubelet[3337]: W0421 12:03:16.066733 3337 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:16.066826 kubelet[3337]: E0421 12:03:16.066747 3337 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:16.548278 containerd[1827]: time="2026-04-21T12:03:16.548217552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:16.550808 containerd[1827]: time="2026-04-21T12:03:16.550732910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 12:03:16.554416 containerd[1827]: time="2026-04-21T12:03:16.554344893Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:16.560366 containerd[1827]: time="2026-04-21T12:03:16.560313631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:16.561192 containerd[1827]: time="2026-04-21T12:03:16.561022548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.775287402s" Apr 21 12:03:16.561192 containerd[1827]: time="2026-04-21T12:03:16.561069249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 12:03:16.570245 containerd[1827]: time="2026-04-21T12:03:16.570203060Z" level=info msg="CreateContainer within sandbox \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 12:03:16.617737 containerd[1827]: time="2026-04-21T12:03:16.617684756Z" level=info msg="CreateContainer within sandbox \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6\"" Apr 21 12:03:16.618643 containerd[1827]: time="2026-04-21T12:03:16.618562076Z" level=info msg="StartContainer for \"73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6\"" Apr 21 12:03:16.657334 systemd[1]: run-containerd-runc-k8s.io-73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6-runc.MQVCIc.mount: Deactivated successfully. Apr 21 12:03:16.696079 containerd[1827]: time="2026-04-21T12:03:16.695901862Z" level=info msg="StartContainer for \"73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6\" returns successfully" Apr 21 12:03:16.731671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6-rootfs.mount: Deactivated successfully. 
Apr 21 12:03:17.005441 kubelet[3337]: I0421 12:03:17.002799 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-656cbb6c5-bvdvs" podStartSLOduration=3.300382546 podStartE2EDuration="6.002774848s" podCreationTimestamp="2026-04-21 12:03:11 +0000 UTC" firstStartedPulling="2026-04-21 12:03:12.083094937 +0000 UTC m=+24.330008587" lastFinishedPulling="2026-04-21 12:03:14.785487239 +0000 UTC m=+27.032400889" observedRunningTime="2026-04-21 12:03:14.991650369 +0000 UTC m=+27.238564019" watchObservedRunningTime="2026-04-21 12:03:17.002774848 +0000 UTC m=+29.249688498" Apr 21 12:03:17.864145 kubelet[3337]: E0421 12:03:17.863664 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:18.618222 containerd[1827]: time="2026-04-21T12:03:18.014250805Z" level=error msg="collecting metrics for 73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6" error="cgroups: cgroup deleted: unknown" Apr 21 12:03:18.679156 containerd[1827]: time="2026-04-21T12:03:18.679063856Z" level=info msg="shim disconnected" id=73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6 namespace=k8s.io Apr 21 12:03:18.679156 containerd[1827]: time="2026-04-21T12:03:18.679146358Z" level=warning msg="cleaning up after shim disconnected" id=73bcbd010b1989b1521183dc480e3c491d981323ef1ba5e7f88d171d472040b6 namespace=k8s.io Apr 21 12:03:18.679156 containerd[1827]: time="2026-04-21T12:03:18.679159558Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 12:03:18.988374 containerd[1827]: time="2026-04-21T12:03:18.988063991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 12:03:19.863329 kubelet[3337]: E0421 12:03:19.862688 3337 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:21.863695 kubelet[3337]: E0421 12:03:21.863095 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:23.864333 kubelet[3337]: E0421 12:03:23.863404 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:25.864605 kubelet[3337]: E0421 12:03:25.863901 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:27.058119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159242976.mount: Deactivated successfully. 
Apr 21 12:03:27.107769 containerd[1827]: time="2026-04-21T12:03:27.107708929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:27.110984 containerd[1827]: time="2026-04-21T12:03:27.110917204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 12:03:27.114087 containerd[1827]: time="2026-04-21T12:03:27.114030377Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:27.118910 containerd[1827]: time="2026-04-21T12:03:27.118845589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:27.120923 containerd[1827]: time="2026-04-21T12:03:27.120165619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.132053027s" Apr 21 12:03:27.120923 containerd[1827]: time="2026-04-21T12:03:27.120206920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 12:03:27.129254 containerd[1827]: time="2026-04-21T12:03:27.129218130Z" level=info msg="CreateContainer within sandbox \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 12:03:27.168791 containerd[1827]: time="2026-04-21T12:03:27.168726850Z" level=info 
msg="CreateContainer within sandbox \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"fb80adbd2624afa92f65797153f7c2a14e7eb4570a56e2e4c1e266cb8eb32816\"" Apr 21 12:03:27.169529 containerd[1827]: time="2026-04-21T12:03:27.169420966Z" level=info msg="StartContainer for \"fb80adbd2624afa92f65797153f7c2a14e7eb4570a56e2e4c1e266cb8eb32816\"" Apr 21 12:03:27.236912 containerd[1827]: time="2026-04-21T12:03:27.236848236Z" level=info msg="StartContainer for \"fb80adbd2624afa92f65797153f7c2a14e7eb4570a56e2e4c1e266cb8eb32816\" returns successfully" Apr 21 12:03:27.864171 kubelet[3337]: E0421 12:03:27.863735 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:28.063175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb80adbd2624afa92f65797153f7c2a14e7eb4570a56e2e4c1e266cb8eb32816-rootfs.mount: Deactivated successfully. 
Apr 21 12:03:28.672329 containerd[1827]: time="2026-04-21T12:03:28.672280755Z" level=error msg="collecting metrics for fb80adbd2624afa92f65797153f7c2a14e7eb4570a56e2e4c1e266cb8eb32816" error="cgroups: cgroup deleted: unknown" Apr 21 12:03:29.863810 kubelet[3337]: E0421 12:03:29.863059 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:30.574072 containerd[1827]: time="2026-04-21T12:03:30.573992130Z" level=info msg="shim disconnected" id=fb80adbd2624afa92f65797153f7c2a14e7eb4570a56e2e4c1e266cb8eb32816 namespace=k8s.io Apr 21 12:03:30.574072 containerd[1827]: time="2026-04-21T12:03:30.574064832Z" level=warning msg="cleaning up after shim disconnected" id=fb80adbd2624afa92f65797153f7c2a14e7eb4570a56e2e4c1e266cb8eb32816 namespace=k8s.io Apr 21 12:03:30.574072 containerd[1827]: time="2026-04-21T12:03:30.574079832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 12:03:31.016269 containerd[1827]: time="2026-04-21T12:03:31.015674614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 12:03:31.863791 kubelet[3337]: E0421 12:03:31.863264 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:33.242163 kubelet[3337]: I0421 12:03:33.241750 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 12:03:33.863606 kubelet[3337]: E0421 12:03:33.863441 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:35.322813 containerd[1827]: time="2026-04-21T12:03:35.322745457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:35.325516 containerd[1827]: time="2026-04-21T12:03:35.325443617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 12:03:35.330184 containerd[1827]: time="2026-04-21T12:03:35.330119922Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:35.339912 containerd[1827]: time="2026-04-21T12:03:35.339834840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:35.340793 containerd[1827]: time="2026-04-21T12:03:35.340614458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.324891643s" Apr 21 12:03:35.340793 containerd[1827]: time="2026-04-21T12:03:35.340657459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 12:03:35.349674 containerd[1827]: time="2026-04-21T12:03:35.349629160Z" level=info msg="CreateContainer within sandbox 
\"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 12:03:35.388370 containerd[1827]: time="2026-04-21T12:03:35.388301728Z" level=info msg="CreateContainer within sandbox \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"452222833348a63a99563b6ac44c9ad24cb810def4ddf7167aefdd8496119bb4\"" Apr 21 12:03:35.389067 containerd[1827]: time="2026-04-21T12:03:35.388970643Z" level=info msg="StartContainer for \"452222833348a63a99563b6ac44c9ad24cb810def4ddf7167aefdd8496119bb4\"" Apr 21 12:03:35.457639 containerd[1827]: time="2026-04-21T12:03:35.457216474Z" level=info msg="StartContainer for \"452222833348a63a99563b6ac44c9ad24cb810def4ddf7167aefdd8496119bb4\" returns successfully" Apr 21 12:03:35.863919 kubelet[3337]: E0421 12:03:35.862994 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:37.170635 containerd[1827]: time="2026-04-21T12:03:37.170118706Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 12:03:37.182553 kubelet[3337]: I0421 12:03:37.181806 3337 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 12:03:37.209116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-452222833348a63a99563b6ac44c9ad24cb810def4ddf7167aefdd8496119bb4-rootfs.mount: Deactivated successfully. 
Apr 21 12:03:38.436839 containerd[1827]: time="2026-04-21T12:03:38.436765826Z" level=info msg="shim disconnected" id=452222833348a63a99563b6ac44c9ad24cb810def4ddf7167aefdd8496119bb4 namespace=k8s.io Apr 21 12:03:38.436839 containerd[1827]: time="2026-04-21T12:03:38.436837427Z" level=warning msg="cleaning up after shim disconnected" id=452222833348a63a99563b6ac44c9ad24cb810def4ddf7167aefdd8496119bb4 namespace=k8s.io Apr 21 12:03:38.436839 containerd[1827]: time="2026-04-21T12:03:38.436848628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 12:03:38.495464 containerd[1827]: time="2026-04-21T12:03:38.494956331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vbvh,Uid:63866721-d13a-44fd-83f4-80f75344c988,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:38.524589 kubelet[3337]: I0421 12:03:38.524541 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7h9r\" (UniqueName: \"kubernetes.io/projected/bc6d433d-f787-4f7f-b79e-79db252ad0a3-kube-api-access-g7h9r\") pod \"goldmane-5b85766d88-xldpb\" (UID: \"bc6d433d-f787-4f7f-b79e-79db252ad0a3\") " pod="calico-system/goldmane-5b85766d88-xldpb" Apr 21 12:03:38.525930 kubelet[3337]: I0421 12:03:38.525323 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvstj\" (UniqueName: \"kubernetes.io/projected/ee7802c0-cf49-4f20-9d8b-620165e7dc87-kube-api-access-zvstj\") pod \"calico-apiserver-6d67f5f978-9429z\" (UID: \"ee7802c0-cf49-4f20-9d8b-620165e7dc87\") " pod="calico-system/calico-apiserver-6d67f5f978-9429z" Apr 21 12:03:38.525930 kubelet[3337]: I0421 12:03:38.525403 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcr7l\" (UniqueName: \"kubernetes.io/projected/7be57639-09c2-4414-a7be-12502cbf1c2c-kube-api-access-qcr7l\") pod \"coredns-674b8bbfcf-nkbzx\" (UID: 
\"7be57639-09c2-4414-a7be-12502cbf1c2c\") " pod="kube-system/coredns-674b8bbfcf-nkbzx" Apr 21 12:03:38.525930 kubelet[3337]: I0421 12:03:38.525477 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltthj\" (UniqueName: \"kubernetes.io/projected/99e71ca6-4515-43f1-9773-fcdfc5a40507-kube-api-access-ltthj\") pod \"coredns-674b8bbfcf-6mm7f\" (UID: \"99e71ca6-4515-43f1-9773-fcdfc5a40507\") " pod="kube-system/coredns-674b8bbfcf-6mm7f" Apr 21 12:03:38.525930 kubelet[3337]: I0421 12:03:38.525546 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bc6d433d-f787-4f7f-b79e-79db252ad0a3-goldmane-key-pair\") pod \"goldmane-5b85766d88-xldpb\" (UID: \"bc6d433d-f787-4f7f-b79e-79db252ad0a3\") " pod="calico-system/goldmane-5b85766d88-xldpb" Apr 21 12:03:38.525930 kubelet[3337]: I0421 12:03:38.525610 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7be57639-09c2-4414-a7be-12502cbf1c2c-config-volume\") pod \"coredns-674b8bbfcf-nkbzx\" (UID: \"7be57639-09c2-4414-a7be-12502cbf1c2c\") " pod="kube-system/coredns-674b8bbfcf-nkbzx" Apr 21 12:03:38.526825 kubelet[3337]: I0421 12:03:38.525646 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9ddf0e0e-ea69-4d09-96e5-101c57c477a6-calico-apiserver-certs\") pod \"calico-apiserver-6d67f5f978-xpcf7\" (UID: \"9ddf0e0e-ea69-4d09-96e5-101c57c477a6\") " pod="calico-system/calico-apiserver-6d67f5f978-xpcf7" Apr 21 12:03:38.526825 kubelet[3337]: I0421 12:03:38.525722 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/99e71ca6-4515-43f1-9773-fcdfc5a40507-config-volume\") pod \"coredns-674b8bbfcf-6mm7f\" (UID: \"99e71ca6-4515-43f1-9773-fcdfc5a40507\") " pod="kube-system/coredns-674b8bbfcf-6mm7f" Apr 21 12:03:38.526825 kubelet[3337]: I0421 12:03:38.525785 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6d433d-f787-4f7f-b79e-79db252ad0a3-config\") pod \"goldmane-5b85766d88-xldpb\" (UID: \"bc6d433d-f787-4f7f-b79e-79db252ad0a3\") " pod="calico-system/goldmane-5b85766d88-xldpb" Apr 21 12:03:38.526825 kubelet[3337]: I0421 12:03:38.525903 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc6d433d-f787-4f7f-b79e-79db252ad0a3-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-xldpb\" (UID: \"bc6d433d-f787-4f7f-b79e-79db252ad0a3\") " pod="calico-system/goldmane-5b85766d88-xldpb" Apr 21 12:03:38.526825 kubelet[3337]: I0421 12:03:38.526203 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba14df3-ce7b-4843-8cfc-82850053c280-tigera-ca-bundle\") pod \"calico-kube-controllers-5fb546f67b-nbfw7\" (UID: \"5ba14df3-ce7b-4843-8cfc-82850053c280\") " pod="calico-system/calico-kube-controllers-5fb546f67b-nbfw7" Apr 21 12:03:38.527416 kubelet[3337]: I0421 12:03:38.526499 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zbz\" (UniqueName: \"kubernetes.io/projected/5ba14df3-ce7b-4843-8cfc-82850053c280-kube-api-access-p4zbz\") pod \"calico-kube-controllers-5fb546f67b-nbfw7\" (UID: \"5ba14df3-ce7b-4843-8cfc-82850053c280\") " pod="calico-system/calico-kube-controllers-5fb546f67b-nbfw7" Apr 21 12:03:38.527416 kubelet[3337]: I0421 12:03:38.526610 3337 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkt5d\" (UniqueName: \"kubernetes.io/projected/9ddf0e0e-ea69-4d09-96e5-101c57c477a6-kube-api-access-tkt5d\") pod \"calico-apiserver-6d67f5f978-xpcf7\" (UID: \"9ddf0e0e-ea69-4d09-96e5-101c57c477a6\") " pod="calico-system/calico-apiserver-6d67f5f978-xpcf7" Apr 21 12:03:38.527416 kubelet[3337]: I0421 12:03:38.526945 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee7802c0-cf49-4f20-9d8b-620165e7dc87-calico-apiserver-certs\") pod \"calico-apiserver-6d67f5f978-9429z\" (UID: \"ee7802c0-cf49-4f20-9d8b-620165e7dc87\") " pod="calico-system/calico-apiserver-6d67f5f978-9429z" Apr 21 12:03:38.614558 containerd[1827]: time="2026-04-21T12:03:38.614485913Z" level=error msg="Failed to destroy network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.614910 containerd[1827]: time="2026-04-21T12:03:38.614876222Z" level=error msg="encountered an error cleaning up failed sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.615022 containerd[1827]: time="2026-04-21T12:03:38.614937623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vbvh,Uid:63866721-d13a-44fd-83f4-80f75344c988,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.615248 kubelet[3337]: E0421 12:03:38.615189 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.615504 kubelet[3337]: E0421 12:03:38.615375 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4vbvh" Apr 21 12:03:38.615658 kubelet[3337]: E0421 12:03:38.615485 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4vbvh" Apr 21 12:03:38.617626 kubelet[3337]: E0421 12:03:38.617549 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4vbvh_calico-system(63866721-d13a-44fd-83f4-80f75344c988)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4vbvh_calico-system(63866721-d13a-44fd-83f4-80f75344c988)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:38.622145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96-shm.mount: Deactivated successfully. Apr 21 12:03:38.628210 kubelet[3337]: I0421 12:03:38.628170 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-nginx-config\") pod \"whisker-5668cfd99f-v5nbx\" (UID: \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " pod="calico-system/whisker-5668cfd99f-v5nbx" Apr 21 12:03:38.628322 kubelet[3337]: I0421 12:03:38.628239 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblcg\" (UniqueName: \"kubernetes.io/projected/cc7c9377-a25d-48d2-aa0f-217d6f89092f-kube-api-access-fblcg\") pod \"whisker-5668cfd99f-v5nbx\" (UID: \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " pod="calico-system/whisker-5668cfd99f-v5nbx" Apr 21 12:03:38.630514 kubelet[3337]: I0421 12:03:38.628389 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-backend-key-pair\") pod \"whisker-5668cfd99f-v5nbx\" (UID: \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " pod="calico-system/whisker-5668cfd99f-v5nbx" Apr 21 12:03:38.630514 kubelet[3337]: I0421 12:03:38.628485 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-ca-bundle\") pod \"whisker-5668cfd99f-v5nbx\" (UID: \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " pod="calico-system/whisker-5668cfd99f-v5nbx" Apr 21 12:03:38.735297 containerd[1827]: time="2026-04-21T12:03:38.735167321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mm7f,Uid:99e71ca6-4515-43f1-9773-fcdfc5a40507,Namespace:kube-system,Attempt:0,}" Apr 21 12:03:38.759641 containerd[1827]: time="2026-04-21T12:03:38.759595669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-9429z,Uid:ee7802c0-cf49-4f20-9d8b-620165e7dc87,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:38.766908 containerd[1827]: time="2026-04-21T12:03:38.766862532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-xpcf7,Uid:9ddf0e0e-ea69-4d09-96e5-101c57c477a6,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:38.770685 containerd[1827]: time="2026-04-21T12:03:38.770637617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb546f67b-nbfw7,Uid:5ba14df3-ce7b-4843-8cfc-82850053c280,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:38.795390 containerd[1827]: time="2026-04-21T12:03:38.795013664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xldpb,Uid:bc6d433d-f787-4f7f-b79e-79db252ad0a3,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:38.815243 containerd[1827]: time="2026-04-21T12:03:38.815197417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nkbzx,Uid:7be57639-09c2-4414-a7be-12502cbf1c2c,Namespace:kube-system,Attempt:0,}" Apr 21 12:03:38.815576 containerd[1827]: time="2026-04-21T12:03:38.815545224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5668cfd99f-v5nbx,Uid:cc7c9377-a25d-48d2-aa0f-217d6f89092f,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:38.881362 containerd[1827]: 
time="2026-04-21T12:03:38.881241598Z" level=error msg="Failed to destroy network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.882995 containerd[1827]: time="2026-04-21T12:03:38.882767133Z" level=error msg="encountered an error cleaning up failed sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.882995 containerd[1827]: time="2026-04-21T12:03:38.882864835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mm7f,Uid:99e71ca6-4515-43f1-9773-fcdfc5a40507,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.883267 kubelet[3337]: E0421 12:03:38.883148 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:38.883267 kubelet[3337]: E0421 12:03:38.883230 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6mm7f" Apr 21 12:03:38.883533 kubelet[3337]: E0421 12:03:38.883270 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6mm7f" Apr 21 12:03:38.883533 kubelet[3337]: E0421 12:03:38.883343 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6mm7f_kube-system(99e71ca6-4515-43f1-9773-fcdfc5a40507)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6mm7f_kube-system(99e71ca6-4515-43f1-9773-fcdfc5a40507)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6mm7f" podUID="99e71ca6-4515-43f1-9773-fcdfc5a40507" Apr 21 12:03:39.045627 kubelet[3337]: I0421 12:03:39.044581 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:03:39.052094 containerd[1827]: time="2026-04-21T12:03:39.052045431Z" level=info msg="StopPodSandbox for \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\"" Apr 21 
12:03:39.052800 containerd[1827]: time="2026-04-21T12:03:39.052679245Z" level=info msg="Ensure that sandbox 60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96 in task-service has been cleanup successfully" Apr 21 12:03:39.056370 kubelet[3337]: I0421 12:03:39.056107 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:03:39.064954 containerd[1827]: time="2026-04-21T12:03:39.064312706Z" level=info msg="StopPodSandbox for \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\"" Apr 21 12:03:39.064954 containerd[1827]: time="2026-04-21T12:03:39.064627913Z" level=info msg="Ensure that sandbox 856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d in task-service has been cleanup successfully" Apr 21 12:03:39.101198 containerd[1827]: time="2026-04-21T12:03:39.101145732Z" level=info msg="CreateContainer within sandbox \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 12:03:39.148484 containerd[1827]: time="2026-04-21T12:03:39.148356392Z" level=error msg="Failed to destroy network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.148879 containerd[1827]: time="2026-04-21T12:03:39.148836602Z" level=error msg="encountered an error cleaning up failed sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.148991 containerd[1827]: 
time="2026-04-21T12:03:39.148909404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-9429z,Uid:ee7802c0-cf49-4f20-9d8b-620165e7dc87,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.149316 kubelet[3337]: E0421 12:03:39.149191 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.149316 kubelet[3337]: E0421 12:03:39.149267 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d67f5f978-9429z" Apr 21 12:03:39.149316 kubelet[3337]: E0421 12:03:39.149302 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d67f5f978-9429z" Apr 21 12:03:39.149644 kubelet[3337]: 
E0421 12:03:39.149375 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d67f5f978-9429z_calico-system(ee7802c0-cf49-4f20-9d8b-620165e7dc87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d67f5f978-9429z_calico-system(ee7802c0-cf49-4f20-9d8b-620165e7dc87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d67f5f978-9429z" podUID="ee7802c0-cf49-4f20-9d8b-620165e7dc87" Apr 21 12:03:39.193368 containerd[1827]: time="2026-04-21T12:03:39.193311100Z" level=info msg="CreateContainer within sandbox \"ac0880f8688a0b6efbe67b654a490cf532ace967ae8eb08f4aeabd9d9b7290b1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5ef65f6aecec086c20945ea2d309c333a1ab4241841757fcf30bfe679d52bac1\"" Apr 21 12:03:39.195910 containerd[1827]: time="2026-04-21T12:03:39.195871358Z" level=info msg="StartContainer for \"5ef65f6aecec086c20945ea2d309c333a1ab4241841757fcf30bfe679d52bac1\"" Apr 21 12:03:39.203732 containerd[1827]: time="2026-04-21T12:03:39.203674233Z" level=error msg="StopPodSandbox for \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\" failed" error="failed to destroy network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.204003 kubelet[3337]: E0421 12:03:39.203934 3337 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:03:39.204095 kubelet[3337]: E0421 12:03:39.204006 3337 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96"} Apr 21 12:03:39.204095 kubelet[3337]: E0421 12:03:39.204075 3337 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63866721-d13a-44fd-83f4-80f75344c988\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:03:39.204232 kubelet[3337]: E0421 12:03:39.204112 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63866721-d13a-44fd-83f4-80f75344c988\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4vbvh" podUID="63866721-d13a-44fd-83f4-80f75344c988" Apr 21 12:03:39.209893 containerd[1827]: time="2026-04-21T12:03:39.209542564Z" level=error msg="Failed to destroy network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.210262 containerd[1827]: time="2026-04-21T12:03:39.210099777Z" level=error msg="encountered an error cleaning up failed sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.210461 containerd[1827]: time="2026-04-21T12:03:39.210352783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xldpb,Uid:bc6d433d-f787-4f7f-b79e-79db252ad0a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.210686 kubelet[3337]: E0421 12:03:39.210594 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.210686 kubelet[3337]: E0421 12:03:39.210663 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xldpb" Apr 21 12:03:39.210798 kubelet[3337]: E0421 12:03:39.210692 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xldpb" Apr 21 12:03:39.210798 kubelet[3337]: E0421 12:03:39.210755 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-xldpb_calico-system(bc6d433d-f787-4f7f-b79e-79db252ad0a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-xldpb_calico-system(bc6d433d-f787-4f7f-b79e-79db252ad0a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-xldpb" podUID="bc6d433d-f787-4f7f-b79e-79db252ad0a3" Apr 21 12:03:39.233010 containerd[1827]: time="2026-04-21T12:03:39.232344876Z" level=error msg="Failed to destroy network for sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.233010 containerd[1827]: time="2026-04-21T12:03:39.232810786Z" level=error msg="encountered an error cleaning up failed sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.233010 containerd[1827]: time="2026-04-21T12:03:39.232884988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb546f67b-nbfw7,Uid:5ba14df3-ce7b-4843-8cfc-82850053c280,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.233318 kubelet[3337]: E0421 12:03:39.233160 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.233318 kubelet[3337]: E0421 12:03:39.233229 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fb546f67b-nbfw7" Apr 21 12:03:39.233318 kubelet[3337]: E0421 12:03:39.233260 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fb546f67b-nbfw7" Apr 21 12:03:39.233469 kubelet[3337]: E0421 12:03:39.233328 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fb546f67b-nbfw7_calico-system(5ba14df3-ce7b-4843-8cfc-82850053c280)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fb546f67b-nbfw7_calico-system(5ba14df3-ce7b-4843-8cfc-82850053c280)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fb546f67b-nbfw7" podUID="5ba14df3-ce7b-4843-8cfc-82850053c280" Apr 21 12:03:39.245806 containerd[1827]: time="2026-04-21T12:03:39.245643674Z" level=error msg="Failed to destroy network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.246518 containerd[1827]: time="2026-04-21T12:03:39.246278589Z" level=error msg="encountered an error cleaning up failed sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 
12:03:39.246518 containerd[1827]: time="2026-04-21T12:03:39.246363491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-xpcf7,Uid:9ddf0e0e-ea69-4d09-96e5-101c57c477a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.247510 kubelet[3337]: E0421 12:03:39.246689 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.247510 kubelet[3337]: E0421 12:03:39.246767 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d67f5f978-xpcf7" Apr 21 12:03:39.247510 kubelet[3337]: E0421 12:03:39.246798 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d67f5f978-xpcf7" Apr 
21 12:03:39.247792 kubelet[3337]: E0421 12:03:39.246863 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d67f5f978-xpcf7_calico-system(9ddf0e0e-ea69-4d09-96e5-101c57c477a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d67f5f978-xpcf7_calico-system(9ddf0e0e-ea69-4d09-96e5-101c57c477a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d67f5f978-xpcf7" podUID="9ddf0e0e-ea69-4d09-96e5-101c57c477a6" Apr 21 12:03:39.274883 containerd[1827]: time="2026-04-21T12:03:39.274810529Z" level=error msg="Failed to destroy network for sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.275240 containerd[1827]: time="2026-04-21T12:03:39.275188737Z" level=error msg="encountered an error cleaning up failed sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.275335 containerd[1827]: time="2026-04-21T12:03:39.275257439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5668cfd99f-v5nbx,Uid:cc7c9377-a25d-48d2-aa0f-217d6f89092f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.276683 kubelet[3337]: E0421 12:03:39.275532 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.276683 kubelet[3337]: E0421 12:03:39.275611 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5668cfd99f-v5nbx" Apr 21 12:03:39.276683 kubelet[3337]: E0421 12:03:39.275643 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5668cfd99f-v5nbx" Apr 21 12:03:39.276867 kubelet[3337]: E0421 12:03:39.275713 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5668cfd99f-v5nbx_calico-system(cc7c9377-a25d-48d2-aa0f-217d6f89092f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5668cfd99f-v5nbx_calico-system(cc7c9377-a25d-48d2-aa0f-217d6f89092f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5668cfd99f-v5nbx" podUID="cc7c9377-a25d-48d2-aa0f-217d6f89092f" Apr 21 12:03:39.282023 containerd[1827]: time="2026-04-21T12:03:39.281535980Z" level=error msg="StopPodSandbox for \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\" failed" error="failed to destroy network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.282183 kubelet[3337]: E0421 12:03:39.281818 3337 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:03:39.282183 kubelet[3337]: E0421 12:03:39.281891 3337 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d"} Apr 21 12:03:39.282183 kubelet[3337]: E0421 12:03:39.281941 3337 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99e71ca6-4515-43f1-9773-fcdfc5a40507\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:03:39.282183 kubelet[3337]: E0421 12:03:39.281973 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99e71ca6-4515-43f1-9773-fcdfc5a40507\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6mm7f" podUID="99e71ca6-4515-43f1-9773-fcdfc5a40507" Apr 21 12:03:39.287379 containerd[1827]: time="2026-04-21T12:03:39.287328810Z" level=error msg="Failed to destroy network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.288055 containerd[1827]: time="2026-04-21T12:03:39.287921523Z" level=error msg="encountered an error cleaning up failed sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.288055 containerd[1827]: time="2026-04-21T12:03:39.287991625Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-nkbzx,Uid:7be57639-09c2-4414-a7be-12502cbf1c2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.288617 kubelet[3337]: E0421 12:03:39.288550 3337 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:03:39.288905 kubelet[3337]: E0421 12:03:39.288798 3337 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nkbzx" Apr 21 12:03:39.289094 kubelet[3337]: E0421 12:03:39.288997 3337 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nkbzx" Apr 21 12:03:39.290351 kubelet[3337]: E0421 12:03:39.289207 3337 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-nkbzx_kube-system(7be57639-09c2-4414-a7be-12502cbf1c2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nkbzx_kube-system(7be57639-09c2-4414-a7be-12502cbf1c2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nkbzx" podUID="7be57639-09c2-4414-a7be-12502cbf1c2c" Apr 21 12:03:39.332992 containerd[1827]: time="2026-04-21T12:03:39.332823730Z" level=info msg="StartContainer for \"5ef65f6aecec086c20945ea2d309c333a1ab4241841757fcf30bfe679d52bac1\" returns successfully" Apr 21 12:03:40.064254 kubelet[3337]: I0421 12:03:40.063980 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:40.064856 containerd[1827]: time="2026-04-21T12:03:40.064760816Z" level=info msg="StopPodSandbox for \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\"" Apr 21 12:03:40.065244 containerd[1827]: time="2026-04-21T12:03:40.064972420Z" level=info msg="Ensure that sandbox 30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7 in task-service has been cleanup successfully" Apr 21 12:03:40.080967 kubelet[3337]: I0421 12:03:40.074748 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:40.087881 containerd[1827]: time="2026-04-21T12:03:40.084904209Z" level=info msg="StopPodSandbox for \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\"" Apr 21 12:03:40.087881 containerd[1827]: time="2026-04-21T12:03:40.085131013Z" level=info msg="Ensure 
that sandbox da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c in task-service has been cleanup successfully" Apr 21 12:03:40.088819 kubelet[3337]: I0421 12:03:40.088780 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:40.089819 containerd[1827]: time="2026-04-21T12:03:40.089633601Z" level=info msg="StopPodSandbox for \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\"" Apr 21 12:03:40.089935 containerd[1827]: time="2026-04-21T12:03:40.089855005Z" level=info msg="Ensure that sandbox fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849 in task-service has been cleanup successfully" Apr 21 12:03:40.094768 kubelet[3337]: I0421 12:03:40.094724 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:40.105133 containerd[1827]: time="2026-04-21T12:03:40.105087003Z" level=info msg="StopPodSandbox for \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\"" Apr 21 12:03:40.105606 containerd[1827]: time="2026-04-21T12:03:40.105578212Z" level=info msg="Ensure that sandbox 27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7 in task-service has been cleanup successfully" Apr 21 12:03:40.130759 kubelet[3337]: I0421 12:03:40.130720 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:40.141920 containerd[1827]: time="2026-04-21T12:03:40.141859620Z" level=info msg="StopPodSandbox for \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\"" Apr 21 12:03:40.142485 containerd[1827]: time="2026-04-21T12:03:40.142206327Z" level=info msg="Ensure that sandbox 34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1 in task-service has been cleanup 
successfully" Apr 21 12:03:40.151943 kubelet[3337]: I0421 12:03:40.151883 3337 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:40.153390 containerd[1827]: time="2026-04-21T12:03:40.153001738Z" level=info msg="StopPodSandbox for \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\"" Apr 21 12:03:40.155207 containerd[1827]: time="2026-04-21T12:03:40.154883474Z" level=info msg="Ensure that sandbox 532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef in task-service has been cleanup successfully" Apr 21 12:03:40.272074 kubelet[3337]: I0421 12:03:40.271373 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cr4cj" podStartSLOduration=6.709391321 podStartE2EDuration="29.271340047s" podCreationTimestamp="2026-04-21 12:03:11 +0000 UTC" firstStartedPulling="2026-04-21 12:03:12.779758856 +0000 UTC m=+25.026672506" lastFinishedPulling="2026-04-21 12:03:35.341707582 +0000 UTC m=+47.588621232" observedRunningTime="2026-04-21 12:03:40.11980669 +0000 UTC m=+52.366720440" watchObservedRunningTime="2026-04-21 12:03:40.271340047 +0000 UTC m=+52.518253797" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.274 [INFO][4592] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.276 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" iface="eth0" netns="/var/run/netns/cni-24730570-138d-8f9e-48ef-6e2858f8d8d7" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.276 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" iface="eth0" netns="/var/run/netns/cni-24730570-138d-8f9e-48ef-6e2858f8d8d7" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.282 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" iface="eth0" netns="/var/run/netns/cni-24730570-138d-8f9e-48ef-6e2858f8d8d7" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.282 [INFO][4592] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.282 [INFO][4592] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.431 [INFO][4644] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.431 [INFO][4644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.431 [INFO][4644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.450 [WARNING][4644] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.450 [INFO][4644] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.452 [INFO][4644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:40.472221 containerd[1827]: 2026-04-21 12:03:40.466 [INFO][4592] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:40.475313 containerd[1827]: time="2026-04-21T12:03:40.474722216Z" level=info msg="TearDown network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\" successfully" Apr 21 12:03:40.475313 containerd[1827]: time="2026-04-21T12:03:40.474764217Z" level=info msg="StopPodSandbox for \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\" returns successfully" Apr 21 12:03:40.478740 containerd[1827]: time="2026-04-21T12:03:40.478387887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-xpcf7,Uid:9ddf0e0e-ea69-4d09-96e5-101c57c477a6,Namespace:calico-system,Attempt:1,}" Apr 21 12:03:40.480614 systemd[1]: run-netns-cni\x2d24730570\x2d138d\x2d8f9e\x2d48ef\x2d6e2858f8d8d7.mount: Deactivated successfully. 
Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.374 [INFO][4554] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.374 [INFO][4554] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" iface="eth0" netns="/var/run/netns/cni-8c1b6f6e-a16c-40b8-9a87-41f504c28f16" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.375 [INFO][4554] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" iface="eth0" netns="/var/run/netns/cni-8c1b6f6e-a16c-40b8-9a87-41f504c28f16" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.376 [INFO][4554] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" iface="eth0" netns="/var/run/netns/cni-8c1b6f6e-a16c-40b8-9a87-41f504c28f16" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.376 [INFO][4554] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.376 [INFO][4554] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.516 [INFO][4663] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.516 [INFO][4663] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.516 [INFO][4663] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.530 [WARNING][4663] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.530 [INFO][4663] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.532 [INFO][4663] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:40.562994 containerd[1827]: 2026-04-21 12:03:40.549 [INFO][4554] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:40.562994 containerd[1827]: time="2026-04-21T12:03:40.555751497Z" level=info msg="TearDown network for sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\" successfully" Apr 21 12:03:40.562994 containerd[1827]: time="2026-04-21T12:03:40.555785298Z" level=info msg="StopPodSandbox for \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\" returns successfully" Apr 21 12:03:40.575942 systemd[1]: run-netns-cni\x2d8c1b6f6e\x2da16c\x2d40b8\x2d9a87\x2d41f504c28f16.mount: Deactivated successfully. 
Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.357 [INFO][4605] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.358 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" iface="eth0" netns="/var/run/netns/cni-19baa6b5-3574-b4ac-a3a5-4cb5004692c3" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.358 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" iface="eth0" netns="/var/run/netns/cni-19baa6b5-3574-b4ac-a3a5-4cb5004692c3" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.359 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" iface="eth0" netns="/var/run/netns/cni-19baa6b5-3574-b4ac-a3a5-4cb5004692c3" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.359 [INFO][4605] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.359 [INFO][4605] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.538 [INFO][4656] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.538 [INFO][4656] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.538 [INFO][4656] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.556 [WARNING][4656] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.556 [INFO][4656] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.558 [INFO][4656] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:40.581008 containerd[1827]: 2026-04-21 12:03:40.569 [INFO][4605] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:40.586168 containerd[1827]: time="2026-04-21T12:03:40.585910286Z" level=info msg="TearDown network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\" successfully" Apr 21 12:03:40.586168 containerd[1827]: time="2026-04-21T12:03:40.585963987Z" level=info msg="StopPodSandbox for \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\" returns successfully" Apr 21 12:03:40.588813 containerd[1827]: time="2026-04-21T12:03:40.588316933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nkbzx,Uid:7be57639-09c2-4414-a7be-12502cbf1c2c,Namespace:kube-system,Attempt:1,}" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.426 [INFO][4630] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.426 [INFO][4630] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" iface="eth0" netns="/var/run/netns/cni-f7be80a2-75c6-07c6-3761-814efae00970" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.426 [INFO][4630] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" iface="eth0" netns="/var/run/netns/cni-f7be80a2-75c6-07c6-3761-814efae00970" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.426 [INFO][4630] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" iface="eth0" netns="/var/run/netns/cni-f7be80a2-75c6-07c6-3761-814efae00970" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.426 [INFO][4630] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.426 [INFO][4630] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.535 [INFO][4678] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.537 [INFO][4678] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.558 [INFO][4678] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.574 [WARNING][4678] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.574 [INFO][4678] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.580 [INFO][4678] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:40.595640 containerd[1827]: 2026-04-21 12:03:40.593 [INFO][4630] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:40.598790 systemd[1]: run-netns-cni\x2d19baa6b5\x2d3574\x2db4ac\x2da3a5\x2d4cb5004692c3.mount: Deactivated successfully. Apr 21 12:03:40.605680 containerd[1827]: time="2026-04-21T12:03:40.602225204Z" level=info msg="TearDown network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\" successfully" Apr 21 12:03:40.605680 containerd[1827]: time="2026-04-21T12:03:40.602272205Z" level=info msg="StopPodSandbox for \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\" returns successfully" Apr 21 12:03:40.605680 containerd[1827]: time="2026-04-21T12:03:40.603987138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-9429z,Uid:ee7802c0-cf49-4f20-9d8b-620165e7dc87,Namespace:calico-system,Attempt:1,}" Apr 21 12:03:40.608022 systemd[1]: run-netns-cni\x2df7be80a2\x2d75c6\x2d07c6\x2d3761\x2d814efae00970.mount: Deactivated successfully. 
Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.408 [INFO][4618] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.408 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" iface="eth0" netns="/var/run/netns/cni-8e1f0841-383f-b61c-e19e-39a3486f0cc0" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.411 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" iface="eth0" netns="/var/run/netns/cni-8e1f0841-383f-b61c-e19e-39a3486f0cc0" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.411 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" iface="eth0" netns="/var/run/netns/cni-8e1f0841-383f-b61c-e19e-39a3486f0cc0" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.411 [INFO][4618] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.411 [INFO][4618] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.531 [INFO][4676] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 
12:03:40.541 [INFO][4676] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.580 [INFO][4676] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.609 [WARNING][4676] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.609 [INFO][4676] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.611 [INFO][4676] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:40.617358 containerd[1827]: 2026-04-21 12:03:40.613 [INFO][4618] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:40.617358 containerd[1827]: time="2026-04-21T12:03:40.617281098Z" level=info msg="TearDown network for sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\" successfully" Apr 21 12:03:40.617358 containerd[1827]: time="2026-04-21T12:03:40.617314198Z" level=info msg="StopPodSandbox for \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\" returns successfully" Apr 21 12:03:40.623378 containerd[1827]: time="2026-04-21T12:03:40.622939908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb546f67b-nbfw7,Uid:5ba14df3-ce7b-4843-8cfc-82850053c280,Namespace:calico-system,Attempt:1,}" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.380 [INFO][4580] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.381 [INFO][4580] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" iface="eth0" netns="/var/run/netns/cni-8f501b9d-903c-a3ee-01b8-c159b6fe8921" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.383 [INFO][4580] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" iface="eth0" netns="/var/run/netns/cni-8f501b9d-903c-a3ee-01b8-c159b6fe8921" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.385 [INFO][4580] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" iface="eth0" netns="/var/run/netns/cni-8f501b9d-903c-a3ee-01b8-c159b6fe8921" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.385 [INFO][4580] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.385 [INFO][4580] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.544 [INFO][4667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.544 [INFO][4667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.615 [INFO][4667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.628 [WARNING][4667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.628 [INFO][4667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.630 [INFO][4667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:40.634845 containerd[1827]: 2026-04-21 12:03:40.632 [INFO][4580] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:40.636078 containerd[1827]: time="2026-04-21T12:03:40.635693257Z" level=info msg="TearDown network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\" successfully" Apr 21 12:03:40.636078 containerd[1827]: time="2026-04-21T12:03:40.635730858Z" level=info msg="StopPodSandbox for \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\" returns successfully" Apr 21 12:03:40.638142 containerd[1827]: time="2026-04-21T12:03:40.638110604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xldpb,Uid:bc6d433d-f787-4f7f-b79e-79db252ad0a3,Namespace:calico-system,Attempt:1,}" Apr 21 12:03:40.651820 kubelet[3337]: I0421 12:03:40.650983 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-backend-key-pair\") pod \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\" (UID: 
\"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " Apr 21 12:03:40.651820 kubelet[3337]: I0421 12:03:40.651041 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-nginx-config\") pod \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\" (UID: \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " Apr 21 12:03:40.651820 kubelet[3337]: I0421 12:03:40.651079 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fblcg\" (UniqueName: \"kubernetes.io/projected/cc7c9377-a25d-48d2-aa0f-217d6f89092f-kube-api-access-fblcg\") pod \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\" (UID: \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " Apr 21 12:03:40.651820 kubelet[3337]: I0421 12:03:40.651146 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-ca-bundle\") pod \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\" (UID: \"cc7c9377-a25d-48d2-aa0f-217d6f89092f\") " Apr 21 12:03:40.651820 kubelet[3337]: I0421 12:03:40.651568 3337 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cc7c9377-a25d-48d2-aa0f-217d6f89092f" (UID: "cc7c9377-a25d-48d2-aa0f-217d6f89092f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 12:03:40.653122 kubelet[3337]: I0421 12:03:40.652694 3337 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "cc7c9377-a25d-48d2-aa0f-217d6f89092f" (UID: "cc7c9377-a25d-48d2-aa0f-217d6f89092f"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 12:03:40.658291 kubelet[3337]: I0421 12:03:40.658229 3337 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7c9377-a25d-48d2-aa0f-217d6f89092f-kube-api-access-fblcg" (OuterVolumeSpecName: "kube-api-access-fblcg") pod "cc7c9377-a25d-48d2-aa0f-217d6f89092f" (UID: "cc7c9377-a25d-48d2-aa0f-217d6f89092f"). InnerVolumeSpecName "kube-api-access-fblcg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 12:03:40.661781 kubelet[3337]: I0421 12:03:40.661707 3337 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cc7c9377-a25d-48d2-aa0f-217d6f89092f" (UID: "cc7c9377-a25d-48d2-aa0f-217d6f89092f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 12:03:40.753596 kubelet[3337]: I0421 12:03:40.752352 3337 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-ca-bundle\") on node \"ci-4081.3.7-a-fffe528a55\" DevicePath \"\"" Apr 21 12:03:40.753596 kubelet[3337]: I0421 12:03:40.752513 3337 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc7c9377-a25d-48d2-aa0f-217d6f89092f-whisker-backend-key-pair\") on node \"ci-4081.3.7-a-fffe528a55\" DevicePath \"\"" Apr 21 12:03:40.753596 kubelet[3337]: I0421 12:03:40.752534 3337 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/cc7c9377-a25d-48d2-aa0f-217d6f89092f-nginx-config\") on node \"ci-4081.3.7-a-fffe528a55\" DevicePath \"\"" Apr 21 12:03:40.753596 kubelet[3337]: I0421 12:03:40.752655 3337 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-fblcg\" (UniqueName: \"kubernetes.io/projected/cc7c9377-a25d-48d2-aa0f-217d6f89092f-kube-api-access-fblcg\") on node \"ci-4081.3.7-a-fffe528a55\" DevicePath \"\"" Apr 21 12:03:40.824037 systemd-networkd[1402]: cali3c3c332a8f7: Link UP Apr 21 12:03:40.828127 systemd-networkd[1402]: cali3c3c332a8f7: Gained carrier Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.636 [ERROR][4696] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.650 [INFO][4696] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0 calico-apiserver-6d67f5f978- calico-system 9ddf0e0e-ea69-4d09-96e5-101c57c477a6 951 0 2026-04-21 12:03:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d67f5f978 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 calico-apiserver-6d67f5f978-xpcf7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3c3c332a8f7 [] [] }} ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.650 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" 
WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.684 [INFO][4714] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" HandleID="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.692 [INFO][4714] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" HandleID="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"calico-apiserver-6d67f5f978-xpcf7", "timestamp":"2026-04-21 12:03:40.684563011 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00063e000)} Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.692 [INFO][4714] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.692 [INFO][4714] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.692 [INFO][4714] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.695 [INFO][4714] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.699 [INFO][4714] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.704 [INFO][4714] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.706 [INFO][4714] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.708 [INFO][4714] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.708 [INFO][4714] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.710 [INFO][4714] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685 Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.721 [INFO][4714] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.731 [INFO][4714] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.50.1/26] block=192.168.50.0/26 handle="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.732 [INFO][4714] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.1/26] handle="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.732 [INFO][4714] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:40.915840 containerd[1827]: 2026-04-21 12:03:40.732 [INFO][4714] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.1/26] IPv6=[] ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" HandleID="k8s-pod-network.aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.921934 containerd[1827]: 2026-04-21 12:03:40.740 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"9ddf0e0e-ea69-4d09-96e5-101c57c477a6", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"calico-apiserver-6d67f5f978-xpcf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3c3c332a8f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:40.921934 containerd[1827]: 2026-04-21 12:03:40.742 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.1/32] ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.921934 containerd[1827]: 2026-04-21 12:03:40.743 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c3c332a8f7 ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.921934 containerd[1827]: 2026-04-21 12:03:40.841 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" 
WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:40.921934 containerd[1827]: 2026-04-21 12:03:40.855 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"9ddf0e0e-ea69-4d09-96e5-101c57c477a6", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685", Pod:"calico-apiserver-6d67f5f978-xpcf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3c3c332a8f7", MAC:"9e:27:8b:c1:48:d4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:40.921934 containerd[1827]: 2026-04-21 12:03:40.892 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-xpcf7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:41.131948 containerd[1827]: time="2026-04-21T12:03:41.128784480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:41.131948 containerd[1827]: time="2026-04-21T12:03:41.128901382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:41.131948 containerd[1827]: time="2026-04-21T12:03:41.128934082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:41.131948 containerd[1827]: time="2026-04-21T12:03:41.129077485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:41.460532 kubelet[3337]: I0421 12:03:41.459904 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e-whisker-backend-key-pair\") pod \"whisker-fc698ddcb-n6cm2\" (UID: \"d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e\") " pod="calico-system/whisker-fc698ddcb-n6cm2" Apr 21 12:03:41.462173 kubelet[3337]: I0421 12:03:41.460940 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e-whisker-ca-bundle\") pod \"whisker-fc698ddcb-n6cm2\" (UID: \"d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e\") " pod="calico-system/whisker-fc698ddcb-n6cm2" Apr 21 12:03:41.462173 kubelet[3337]: I0421 12:03:41.460991 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnbf\" (UniqueName: \"kubernetes.io/projected/d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e-kube-api-access-5vnbf\") pod \"whisker-fc698ddcb-n6cm2\" (UID: \"d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e\") " pod="calico-system/whisker-fc698ddcb-n6cm2" Apr 21 12:03:41.462173 kubelet[3337]: I0421 12:03:41.461028 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e-nginx-config\") pod \"whisker-fc698ddcb-n6cm2\" (UID: \"d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e\") " pod="calico-system/whisker-fc698ddcb-n6cm2" Apr 21 12:03:41.509609 systemd-networkd[1402]: cali9a61259e769: Link UP Apr 21 12:03:41.511831 systemd-networkd[1402]: cali9a61259e769: Gained carrier Apr 21 12:03:41.572821 systemd[1]: run-netns-cni\x2d8f501b9d\x2d903c\x2da3ee\x2d01b8\x2dc159b6fe8921.mount: Deactivated successfully. 
Apr 21 12:03:41.573014 systemd[1]: run-netns-cni\x2d8e1f0841\x2d383f\x2db61c\x2de19e\x2d39a3486f0cc0.mount: Deactivated successfully. Apr 21 12:03:41.573141 systemd[1]: var-lib-kubelet-pods-cc7c9377\x2da25d\x2d48d2\x2daa0f\x2d217d6f89092f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfblcg.mount: Deactivated successfully. Apr 21 12:03:41.573297 systemd[1]: var-lib-kubelet-pods-cc7c9377\x2da25d\x2d48d2\x2daa0f\x2d217d6f89092f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:40.758 [ERROR][4721] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:40.791 [INFO][4721] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0 calico-apiserver-6d67f5f978- calico-system ee7802c0-cf49-4f20-9d8b-620165e7dc87 957 0 2026-04-21 12:03:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d67f5f978 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 calico-apiserver-6d67f5f978-9429z eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9a61259e769 [] [] }} ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:40.791 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.212 [INFO][4807] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" HandleID="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.282 [INFO][4807] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" HandleID="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4840), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"calico-apiserver-6d67f5f978-9429z", "timestamp":"2026-04-21 12:03:41.212397511 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f7080)} Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.282 [INFO][4807] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.283 [INFO][4807] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.283 [INFO][4807] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.300 [INFO][4807] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.347 [INFO][4807] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.384 [INFO][4807] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.402 [INFO][4807] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.432 [INFO][4807] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.432 [INFO][4807] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.440 [INFO][4807] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.466 [INFO][4807] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.477 [INFO][4807] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.50.2/26] block=192.168.50.0/26 handle="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.477 [INFO][4807] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.2/26] handle="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.477 [INFO][4807] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:41.631896 containerd[1827]: 2026-04-21 12:03:41.477 [INFO][4807] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.2/26] IPv6=[] ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" HandleID="k8s-pod-network.da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:41.635418 containerd[1827]: 2026-04-21 12:03:41.492 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"ee7802c0-cf49-4f20-9d8b-620165e7dc87", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"calico-apiserver-6d67f5f978-9429z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a61259e769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:41.635418 containerd[1827]: 2026-04-21 12:03:41.492 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.2/32] ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:41.635418 containerd[1827]: 2026-04-21 12:03:41.492 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a61259e769 ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:41.635418 containerd[1827]: 2026-04-21 12:03:41.514 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" 
WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:41.635418 containerd[1827]: 2026-04-21 12:03:41.515 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"ee7802c0-cf49-4f20-9d8b-620165e7dc87", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e", Pod:"calico-apiserver-6d67f5f978-9429z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a61259e769", MAC:"5a:f4:3a:c6:8f:6a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:41.635418 containerd[1827]: 2026-04-21 12:03:41.552 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e" Namespace="calico-system" Pod="calico-apiserver-6d67f5f978-9429z" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:41.671451 containerd[1827]: time="2026-04-21T12:03:41.671385268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc698ddcb-n6cm2,Uid:d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:41.676804 containerd[1827]: time="2026-04-21T12:03:41.673293605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-xpcf7,Uid:9ddf0e0e-ea69-4d09-96e5-101c57c477a6,Namespace:calico-system,Attempt:1,} returns sandbox id \"aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685\"" Apr 21 12:03:41.681049 containerd[1827]: time="2026-04-21T12:03:41.681011456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 12:03:41.742704 systemd-networkd[1402]: calif4b3a7edc59: Link UP Apr 21 12:03:41.753899 systemd-networkd[1402]: calif4b3a7edc59: Gained carrier Apr 21 12:03:41.755530 kernel: calico-node[4801]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 12:03:41.767890 containerd[1827]: time="2026-04-21T12:03:41.765531905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:41.767890 containerd[1827]: time="2026-04-21T12:03:41.765628807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:41.767890 containerd[1827]: time="2026-04-21T12:03:41.765649708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:41.767890 containerd[1827]: time="2026-04-21T12:03:41.765782610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.008 [ERROR][4741] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.040 [INFO][4741] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0 calico-kube-controllers-5fb546f67b- calico-system 5ba14df3-ce7b-4843-8cfc-82850053c280 956 0 2026-04-21 12:03:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fb546f67b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 calico-kube-controllers-5fb546f67b-nbfw7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif4b3a7edc59 [] [] }} ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.042 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.418 [INFO][4869] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" HandleID="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.454 [INFO][4869] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" HandleID="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c9660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"calico-kube-controllers-5fb546f67b-nbfw7", "timestamp":"2026-04-21 12:03:41.418969142 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000feb00)} Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.454 [INFO][4869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.477 [INFO][4869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.477 [INFO][4869] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.480 [INFO][4869] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.490 [INFO][4869] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.520 [INFO][4869] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.529 [INFO][4869] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.538 [INFO][4869] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.539 [INFO][4869] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.547 [INFO][4869] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143 Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.611 [INFO][4869] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.651 [INFO][4869] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.50.3/26] block=192.168.50.0/26 handle="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.651 [INFO][4869] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.3/26] handle="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.652 [INFO][4869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:41.856056 containerd[1827]: 2026-04-21 12:03:41.652 [INFO][4869] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.3/26] IPv6=[] ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" HandleID="k8s-pod-network.18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:41.857115 containerd[1827]: 2026-04-21 12:03:41.712 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0", GenerateName:"calico-kube-controllers-5fb546f67b-", Namespace:"calico-system", SelfLink:"", UID:"5ba14df3-ce7b-4843-8cfc-82850053c280", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb546f67b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"calico-kube-controllers-5fb546f67b-nbfw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4b3a7edc59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:41.857115 containerd[1827]: 2026-04-21 12:03:41.712 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.3/32] ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:41.857115 containerd[1827]: 2026-04-21 12:03:41.712 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4b3a7edc59 ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:41.857115 containerd[1827]: 2026-04-21 12:03:41.753 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" 
Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:41.857115 containerd[1827]: 2026-04-21 12:03:41.761 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0", GenerateName:"calico-kube-controllers-5fb546f67b-", Namespace:"calico-system", SelfLink:"", UID:"5ba14df3-ce7b-4843-8cfc-82850053c280", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb546f67b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143", Pod:"calico-kube-controllers-5fb546f67b-nbfw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4b3a7edc59", MAC:"c2:b1:6d:a1:a2:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:41.857115 containerd[1827]: 2026-04-21 12:03:41.848 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143" Namespace="calico-system" Pod="calico-kube-controllers-5fb546f67b-nbfw7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:41.878188 kubelet[3337]: I0421 12:03:41.878139 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7c9377-a25d-48d2-aa0f-217d6f89092f" path="/var/lib/kubelet/pods/cc7c9377-a25d-48d2-aa0f-217d6f89092f/volumes" Apr 21 12:03:41.930974 systemd-networkd[1402]: cali8b5560a6a03: Link UP Apr 21 12:03:41.934623 systemd-networkd[1402]: cali8b5560a6a03: Gained carrier Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:40.944 [ERROR][4730] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.001 [INFO][4730] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0 coredns-674b8bbfcf- kube-system 7be57639-09c2-4414-a7be-12502cbf1c2c 953 0 2026-04-21 12:02:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 coredns-674b8bbfcf-nkbzx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8b5560a6a03 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] 
[] }} ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.001 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.419 [INFO][4849] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" HandleID="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.456 [INFO][4849] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" HandleID="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f98b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"coredns-674b8bbfcf-nkbzx", "timestamp":"2026-04-21 12:03:41.419688156 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004506e0)} Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.456 [INFO][4849] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.651 [INFO][4849] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.651 [INFO][4849] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.657 [INFO][4849] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.711 [INFO][4849] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.784 [INFO][4849] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.792 [INFO][4849] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.805 [INFO][4849] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.805 [INFO][4849] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.808 [INFO][4849] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32 Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.855 [INFO][4849] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" 
host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.898 [INFO][4849] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.4/26] block=192.168.50.0/26 handle="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.898 [INFO][4849] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.4/26] handle="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.898 [INFO][4849] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:42.004148 containerd[1827]: 2026-04-21 12:03:41.898 [INFO][4849] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.4/26] IPv6=[] ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" HandleID="k8s-pod-network.ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:42.006252 containerd[1827]: 2026-04-21 12:03:41.912 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7be57639-09c2-4414-a7be-12502cbf1c2c", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"coredns-674b8bbfcf-nkbzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b5560a6a03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:42.006252 containerd[1827]: 2026-04-21 12:03:41.912 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.4/32] ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:42.006252 containerd[1827]: 2026-04-21 12:03:41.912 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b5560a6a03 ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" 
WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:42.006252 containerd[1827]: 2026-04-21 12:03:41.937 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:42.006252 containerd[1827]: 2026-04-21 12:03:41.954 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7be57639-09c2-4414-a7be-12502cbf1c2c", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32", Pod:"coredns-674b8bbfcf-nkbzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b5560a6a03", MAC:"1a:61:0a:67:96:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:42.006252 containerd[1827]: 2026-04-21 12:03:41.983 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32" Namespace="kube-system" Pod="coredns-674b8bbfcf-nkbzx" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:42.041765 systemd-networkd[1402]: cali8b752483dd0: Link UP Apr 21 12:03:42.044549 systemd-networkd[1402]: cali8b752483dd0: Gained carrier Apr 21 12:03:42.068049 containerd[1827]: time="2026-04-21T12:03:42.066771784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:42.068049 containerd[1827]: time="2026-04-21T12:03:42.066947787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:42.068049 containerd[1827]: time="2026-04-21T12:03:42.067004188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.068049 containerd[1827]: time="2026-04-21T12:03:42.067815204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.090 [ERROR][4765] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.131 [INFO][4765] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0 goldmane-5b85766d88- calico-system bc6d433d-f787-4f7f-b79e-79db252ad0a3 955 0 2026-04-21 12:03:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 goldmane-5b85766d88-xldpb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8b752483dd0 [] [] }} ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.131 [INFO][4765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.470 [INFO][4902] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" HandleID="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" 
Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.487 [INFO][4902] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" HandleID="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"goldmane-5b85766d88-xldpb", "timestamp":"2026-04-21 12:03:41.470837154 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000030580)} Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.488 [INFO][4902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.901 [INFO][4902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.901 [INFO][4902] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.910 [INFO][4902] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.938 [INFO][4902] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.952 [INFO][4902] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.958 [INFO][4902] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.965 [INFO][4902] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.965 [INFO][4902] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.971 [INFO][4902] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9 Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:41.987 [INFO][4902] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:42.008 [INFO][4902] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.50.5/26] block=192.168.50.0/26 handle="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:42.008 [INFO][4902] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.5/26] handle="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:42.009 [INFO][4902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:42.105257 containerd[1827]: 2026-04-21 12:03:42.009 [INFO][4902] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.5/26] IPv6=[] ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" HandleID="k8s-pod-network.cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:42.110413 containerd[1827]: 2026-04-21 12:03:42.015 [INFO][4765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"bc6d433d-f787-4f7f-b79e-79db252ad0a3", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"goldmane-5b85766d88-xldpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b752483dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:42.110413 containerd[1827]: 2026-04-21 12:03:42.015 [INFO][4765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.5/32] ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:42.110413 containerd[1827]: 2026-04-21 12:03:42.019 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b752483dd0 ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:42.110413 containerd[1827]: 2026-04-21 12:03:42.049 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:42.110413 containerd[1827]: 2026-04-21 12:03:42.055 [INFO][4765] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"bc6d433d-f787-4f7f-b79e-79db252ad0a3", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9", Pod:"goldmane-5b85766d88-xldpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b752483dd0", MAC:"9e:8c:2d:c3:ce:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:42.110413 containerd[1827]: 2026-04-21 12:03:42.095 [INFO][4765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9" Namespace="calico-system" Pod="goldmane-5b85766d88-xldpb" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:42.188701 systemd-networkd[1402]: cali3c3c332a8f7: Gained IPv6LL Apr 21 12:03:42.258611 containerd[1827]: time="2026-04-21T12:03:42.255973876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:42.258611 containerd[1827]: time="2026-04-21T12:03:42.256059078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:42.258611 containerd[1827]: time="2026-04-21T12:03:42.256096078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.258611 containerd[1827]: time="2026-04-21T12:03:42.256227181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.287048 containerd[1827]: time="2026-04-21T12:03:42.286672575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:42.287048 containerd[1827]: time="2026-04-21T12:03:42.286758377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:42.287048 containerd[1827]: time="2026-04-21T12:03:42.286813478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.287567 containerd[1827]: time="2026-04-21T12:03:42.287158385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.382926 containerd[1827]: time="2026-04-21T12:03:42.380852613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67f5f978-9429z,Uid:ee7802c0-cf49-4f20-9d8b-620165e7dc87,Namespace:calico-system,Attempt:1,} returns sandbox id \"da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e\"" Apr 21 12:03:42.480777 systemd-networkd[1402]: calif458ed820e0: Link UP Apr 21 12:03:42.481054 systemd-networkd[1402]: calif458ed820e0: Gained carrier Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.111 [INFO][5006] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0 whisker-fc698ddcb- calico-system d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e 976 0 2026-04-21 12:03:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fc698ddcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 whisker-fc698ddcb-n6cm2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif458ed820e0 [] [] }} ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.111 [INFO][5006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.320 [INFO][5087] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" HandleID="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.332 [INFO][5087] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" HandleID="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003636d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"whisker-fc698ddcb-n6cm2", "timestamp":"2026-04-21 12:03:42.320706739 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004a4dc0)} Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.333 [INFO][5087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.333 [INFO][5087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.333 [INFO][5087] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.336 [INFO][5087] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.346 [INFO][5087] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.374 [INFO][5087] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.386 [INFO][5087] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.391 [INFO][5087] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.391 [INFO][5087] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.410 [INFO][5087] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4 Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.424 [INFO][5087] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.443 [INFO][5087] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.50.6/26] block=192.168.50.0/26 handle="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.443 [INFO][5087] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.6/26] handle="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.443 [INFO][5087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:42.545884 containerd[1827]: 2026-04-21 12:03:42.443 [INFO][5087] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.6/26] IPv6=[] ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" HandleID="k8s-pod-network.7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" Apr 21 12:03:42.547175 containerd[1827]: 2026-04-21 12:03:42.455 [INFO][5006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0", GenerateName:"whisker-fc698ddcb-", Namespace:"calico-system", SelfLink:"", UID:"d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fc698ddcb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"whisker-fc698ddcb-n6cm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif458ed820e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:42.547175 containerd[1827]: 2026-04-21 12:03:42.455 [INFO][5006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.6/32] ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" Apr 21 12:03:42.547175 containerd[1827]: 2026-04-21 12:03:42.456 [INFO][5006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif458ed820e0 ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" Apr 21 12:03:42.547175 containerd[1827]: 2026-04-21 12:03:42.482 [INFO][5006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" Apr 21 12:03:42.547175 containerd[1827]: 2026-04-21 12:03:42.482 [INFO][5006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0", GenerateName:"whisker-fc698ddcb-", Namespace:"calico-system", SelfLink:"", UID:"d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fc698ddcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4", Pod:"whisker-fc698ddcb-n6cm2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif458ed820e0", MAC:"96:20:de:56:40:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:42.547175 containerd[1827]: 2026-04-21 12:03:42.518 [INFO][5006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4" Namespace="calico-system" Pod="whisker-fc698ddcb-n6cm2" 
WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--fc698ddcb--n6cm2-eth0" Apr 21 12:03:42.620038 containerd[1827]: time="2026-04-21T12:03:42.619991980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nkbzx,Uid:7be57639-09c2-4414-a7be-12502cbf1c2c,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32\"" Apr 21 12:03:42.660292 containerd[1827]: time="2026-04-21T12:03:42.660232565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb546f67b-nbfw7,Uid:5ba14df3-ce7b-4843-8cfc-82850053c280,Namespace:calico-system,Attempt:1,} returns sandbox id \"18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143\"" Apr 21 12:03:42.678141 containerd[1827]: time="2026-04-21T12:03:42.678084513Z" level=info msg="CreateContainer within sandbox \"ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 12:03:42.696656 containerd[1827]: time="2026-04-21T12:03:42.696192767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xldpb,Uid:bc6d433d-f787-4f7f-b79e-79db252ad0a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9\"" Apr 21 12:03:42.713781 containerd[1827]: time="2026-04-21T12:03:42.699116124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:42.713781 containerd[1827]: time="2026-04-21T12:03:42.699182425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:42.713781 containerd[1827]: time="2026-04-21T12:03:42.699205725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.713781 containerd[1827]: time="2026-04-21T12:03:42.699304327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:42.754347 systemd[1]: run-containerd-runc-k8s.io-7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4-runc.LpO5f6.mount: Deactivated successfully. Apr 21 12:03:42.767391 systemd-networkd[1402]: cali9a61259e769: Gained IPv6LL Apr 21 12:03:42.784318 containerd[1827]: time="2026-04-21T12:03:42.783778076Z" level=info msg="CreateContainer within sandbox \"ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2eb04f3d418f03b9a64a6bc291cbc8d78613e37a2b550b0877381dd200537e1\"" Apr 21 12:03:42.787554 containerd[1827]: time="2026-04-21T12:03:42.787372446Z" level=info msg="StartContainer for \"d2eb04f3d418f03b9a64a6bc291cbc8d78613e37a2b550b0877381dd200537e1\"" Apr 21 12:03:42.937561 containerd[1827]: time="2026-04-21T12:03:42.935112129Z" level=info msg="StartContainer for \"d2eb04f3d418f03b9a64a6bc291cbc8d78613e37a2b550b0877381dd200537e1\" returns successfully" Apr 21 12:03:42.943108 containerd[1827]: time="2026-04-21T12:03:42.943045784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc698ddcb-n6cm2,Uid:d1655fa2-a2aa-44c9-a2e2-fb07ecaa281e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4\"" Apr 21 12:03:43.208747 kubelet[3337]: I0421 12:03:43.207013 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nkbzx" podStartSLOduration=49.206988235 podStartE2EDuration="49.206988235s" podCreationTimestamp="2026-04-21 12:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 
12:03:43.205184099 +0000 UTC m=+55.452097849" watchObservedRunningTime="2026-04-21 12:03:43.206988235 +0000 UTC m=+55.453901985" Apr 21 12:03:43.310305 systemd-networkd[1402]: vxlan.calico: Link UP Apr 21 12:03:43.310317 systemd-networkd[1402]: vxlan.calico: Gained carrier Apr 21 12:03:43.404899 systemd-networkd[1402]: cali8b5560a6a03: Gained IPv6LL Apr 21 12:03:43.468745 systemd-networkd[1402]: cali8b752483dd0: Gained IPv6LL Apr 21 12:03:43.533599 systemd-networkd[1402]: calif4b3a7edc59: Gained IPv6LL Apr 21 12:03:44.236996 systemd-networkd[1402]: calif458ed820e0: Gained IPv6LL Apr 21 12:03:44.749010 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Apr 21 12:03:45.431946 containerd[1827]: time="2026-04-21T12:03:45.431890101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:45.434912 containerd[1827]: time="2026-04-21T12:03:45.434842669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 12:03:45.438445 containerd[1827]: time="2026-04-21T12:03:45.438368350Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:45.443217 containerd[1827]: time="2026-04-21T12:03:45.443148560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:45.444097 containerd[1827]: time="2026-04-21T12:03:45.443958079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.76226051s" Apr 21 12:03:45.444097 containerd[1827]: time="2026-04-21T12:03:45.443997880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 12:03:45.445976 containerd[1827]: time="2026-04-21T12:03:45.445352911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 12:03:45.452046 containerd[1827]: time="2026-04-21T12:03:45.452015464Z" level=info msg="CreateContainer within sandbox \"aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 12:03:45.492285 containerd[1827]: time="2026-04-21T12:03:45.492223890Z" level=info msg="CreateContainer within sandbox \"aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"36e29fd5b0a26e4eb45df56dcb0739002a4c1bc8a1db0e4ec205de4938e4909e\"" Apr 21 12:03:45.493437 containerd[1827]: time="2026-04-21T12:03:45.493396317Z" level=info msg="StartContainer for \"36e29fd5b0a26e4eb45df56dcb0739002a4c1bc8a1db0e4ec205de4938e4909e\"" Apr 21 12:03:45.582366 containerd[1827]: time="2026-04-21T12:03:45.582307162Z" level=info msg="StartContainer for \"36e29fd5b0a26e4eb45df56dcb0739002a4c1bc8a1db0e4ec205de4938e4909e\" returns successfully" Apr 21 12:03:45.779338 containerd[1827]: time="2026-04-21T12:03:45.778140569Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:45.781787 containerd[1827]: time="2026-04-21T12:03:45.781733651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 12:03:45.784227 containerd[1827]: 
time="2026-04-21T12:03:45.784184608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 338.798695ms" Apr 21 12:03:45.784365 containerd[1827]: time="2026-04-21T12:03:45.784347511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 12:03:45.787903 containerd[1827]: time="2026-04-21T12:03:45.787109475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 12:03:45.796109 containerd[1827]: time="2026-04-21T12:03:45.795944778Z" level=info msg="CreateContainer within sandbox \"da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 12:03:45.835319 containerd[1827]: time="2026-04-21T12:03:45.835262683Z" level=info msg="CreateContainer within sandbox \"da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2202f6757109f83ba089e107eb284c4092ae1b75022bc2efe4be86c9918338e6\"" Apr 21 12:03:45.837519 containerd[1827]: time="2026-04-21T12:03:45.837129426Z" level=info msg="StartContainer for \"2202f6757109f83ba089e107eb284c4092ae1b75022bc2efe4be86c9918338e6\"" Apr 21 12:03:45.956250 containerd[1827]: time="2026-04-21T12:03:45.956196366Z" level=info msg="StartContainer for \"2202f6757109f83ba089e107eb284c4092ae1b75022bc2efe4be86c9918338e6\" returns successfully" Apr 21 12:03:46.234909 kubelet[3337]: I0421 12:03:46.232786 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-apiserver-6d67f5f978-xpcf7" podStartSLOduration=33.468277072 podStartE2EDuration="37.23276023s" podCreationTimestamp="2026-04-21 12:03:09 +0000 UTC" firstStartedPulling="2026-04-21 12:03:41.680680349 +0000 UTC m=+53.927593999" lastFinishedPulling="2026-04-21 12:03:45.445163407 +0000 UTC m=+57.692077157" observedRunningTime="2026-04-21 12:03:46.229370852 +0000 UTC m=+58.476284502" watchObservedRunningTime="2026-04-21 12:03:46.23276023 +0000 UTC m=+58.479673980" Apr 21 12:03:46.270038 kubelet[3337]: I0421 12:03:46.267162 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d67f5f978-9429z" podStartSLOduration=33.869444232 podStartE2EDuration="37.267136221s" podCreationTimestamp="2026-04-21 12:03:09 +0000 UTC" firstStartedPulling="2026-04-21 12:03:42.387591244 +0000 UTC m=+54.634504894" lastFinishedPulling="2026-04-21 12:03:45.785283133 +0000 UTC m=+58.032196883" observedRunningTime="2026-04-21 12:03:46.266704411 +0000 UTC m=+58.513618161" watchObservedRunningTime="2026-04-21 12:03:46.267136221 +0000 UTC m=+58.514049971" Apr 21 12:03:47.225102 kubelet[3337]: I0421 12:03:47.224527 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 12:03:47.225102 kubelet[3337]: I0421 12:03:47.224663 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 12:03:47.867171 containerd[1827]: time="2026-04-21T12:03:47.867126310Z" level=info msg="StopPodSandbox for \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\"" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.914 [WARNING][5491] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.914 
[INFO][5491] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.914 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" iface="eth0" netns="" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.914 [INFO][5491] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.914 [INFO][5491] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.957 [INFO][5498] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.957 [INFO][5498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.957 [INFO][5498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.964 [WARNING][5498] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.964 [INFO][5498] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.965 [INFO][5498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:47.967899 containerd[1827]: 2026-04-21 12:03:47.966 [INFO][5491] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:47.968437 containerd[1827]: time="2026-04-21T12:03:47.967971005Z" level=info msg="TearDown network for sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\" successfully" Apr 21 12:03:47.968437 containerd[1827]: time="2026-04-21T12:03:47.968017606Z" level=info msg="StopPodSandbox for \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\" returns successfully" Apr 21 12:03:47.968992 containerd[1827]: time="2026-04-21T12:03:47.968962727Z" level=info msg="RemovePodSandbox for \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\"" Apr 21 12:03:47.969285 containerd[1827]: time="2026-04-21T12:03:47.969151332Z" level=info msg="Forcibly stopping sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\"" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.008 [WARNING][5513] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.008 [INFO][5513] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.008 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" iface="eth0" netns="" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.008 [INFO][5513] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.008 [INFO][5513] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.032 [INFO][5520] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.032 [INFO][5520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.032 [INFO][5520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.040 [WARNING][5520] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.040 [INFO][5520] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" HandleID="k8s-pod-network.30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Workload="ci--4081.3.7--a--fffe528a55-k8s-whisker--5668cfd99f--v5nbx-eth0" Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.042 [INFO][5520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:48.045998 containerd[1827]: 2026-04-21 12:03:48.043 [INFO][5513] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7" Apr 21 12:03:48.045998 containerd[1827]: time="2026-04-21T12:03:48.044664650Z" level=info msg="TearDown network for sandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\" successfully" Apr 21 12:03:48.287420 containerd[1827]: time="2026-04-21T12:03:48.287276371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:03:48.287420 containerd[1827]: time="2026-04-21T12:03:48.287371073Z" level=info msg="RemovePodSandbox \"30288cd13312ff1a408d7bd41b5cf675654ba1aff5a234d110d40daed4e224f7\" returns successfully" Apr 21 12:03:48.288546 containerd[1827]: time="2026-04-21T12:03:48.288379196Z" level=info msg="StopPodSandbox for \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\"" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.348 [WARNING][5538] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"bc6d433d-f787-4f7f-b79e-79db252ad0a3", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9", Pod:"goldmane-5b85766d88-xldpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali8b752483dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.349 [INFO][5538] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.349 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" iface="eth0" netns="" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.350 [INFO][5538] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.350 [INFO][5538] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.387 [INFO][5546] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.387 [INFO][5546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.387 [INFO][5546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.396 [WARNING][5546] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.396 [INFO][5546] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.398 [INFO][5546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:48.402319 containerd[1827]: 2026-04-21 12:03:48.400 [INFO][5538] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.403386 containerd[1827]: time="2026-04-21T12:03:48.402369190Z" level=info msg="TearDown network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\" successfully" Apr 21 12:03:48.403386 containerd[1827]: time="2026-04-21T12:03:48.402403090Z" level=info msg="StopPodSandbox for \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\" returns successfully" Apr 21 12:03:48.403386 containerd[1827]: time="2026-04-21T12:03:48.403373212Z" level=info msg="RemovePodSandbox for \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\"" Apr 21 12:03:48.403878 containerd[1827]: time="2026-04-21T12:03:48.403410913Z" level=info msg="Forcibly stopping sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\"" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.471 [WARNING][5561] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"bc6d433d-f787-4f7f-b79e-79db252ad0a3", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9", Pod:"goldmane-5b85766d88-xldpb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8b752483dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.472 [INFO][5561] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.472 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" iface="eth0" netns="" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.472 [INFO][5561] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.472 [INFO][5561] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.515 [INFO][5569] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.516 [INFO][5569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.516 [INFO][5569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.530 [WARNING][5569] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.530 [INFO][5569] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" HandleID="k8s-pod-network.da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Workload="ci--4081.3.7--a--fffe528a55-k8s-goldmane--5b85766d88--xldpb-eth0" Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.533 [INFO][5569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:48.539540 containerd[1827]: 2026-04-21 12:03:48.536 [INFO][5561] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c" Apr 21 12:03:48.539540 containerd[1827]: time="2026-04-21T12:03:48.538467686Z" level=info msg="TearDown network for sandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\" successfully" Apr 21 12:03:48.547926 containerd[1827]: time="2026-04-21T12:03:48.547718397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:03:48.547926 containerd[1827]: time="2026-04-21T12:03:48.547798899Z" level=info msg="RemovePodSandbox \"da482c0da30ed52e0a7344a261ad6403d33246275199fe19c8b1ec6264a3b67c\" returns successfully" Apr 21 12:03:48.548391 containerd[1827]: time="2026-04-21T12:03:48.548363612Z" level=info msg="StopPodSandbox for \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\"" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.609 [WARNING][5583] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0", GenerateName:"calico-kube-controllers-5fb546f67b-", Namespace:"calico-system", SelfLink:"", UID:"5ba14df3-ce7b-4843-8cfc-82850053c280", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb546f67b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143", Pod:"calico-kube-controllers-5fb546f67b-nbfw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4b3a7edc59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.609 [INFO][5583] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.609 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" iface="eth0" netns="" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.609 [INFO][5583] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.609 [INFO][5583] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.651 [INFO][5590] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.651 [INFO][5590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.651 [INFO][5590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.660 [WARNING][5590] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.660 [INFO][5590] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.662 [INFO][5590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:48.667243 containerd[1827]: 2026-04-21 12:03:48.664 [INFO][5583] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.667243 containerd[1827]: time="2026-04-21T12:03:48.667183915Z" level=info msg="TearDown network for sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\" successfully" Apr 21 12:03:48.667243 containerd[1827]: time="2026-04-21T12:03:48.667218516Z" level=info msg="StopPodSandbox for \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\" returns successfully" Apr 21 12:03:48.669698 containerd[1827]: time="2026-04-21T12:03:48.669327664Z" level=info msg="RemovePodSandbox for \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\"" Apr 21 12:03:48.669698 containerd[1827]: time="2026-04-21T12:03:48.669376165Z" level=info msg="Forcibly stopping sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\"" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.735 [WARNING][5604] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0", GenerateName:"calico-kube-controllers-5fb546f67b-", Namespace:"calico-system", SelfLink:"", UID:"5ba14df3-ce7b-4843-8cfc-82850053c280", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb546f67b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143", Pod:"calico-kube-controllers-5fb546f67b-nbfw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4b3a7edc59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.735 [INFO][5604] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.735 [INFO][5604] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" iface="eth0" netns="" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.735 [INFO][5604] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.735 [INFO][5604] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.777 [INFO][5612] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.777 [INFO][5612] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.777 [INFO][5612] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.788 [WARNING][5612] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.790 [INFO][5612] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" HandleID="k8s-pod-network.34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--kube--controllers--5fb546f67b--nbfw7-eth0" Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.794 [INFO][5612] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:48.801096 containerd[1827]: 2026-04-21 12:03:48.797 [INFO][5604] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1" Apr 21 12:03:48.803908 containerd[1827]: time="2026-04-21T12:03:48.803431516Z" level=info msg="TearDown network for sandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\" successfully" Apr 21 12:03:48.812603 containerd[1827]: time="2026-04-21T12:03:48.812553623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:03:48.812751 containerd[1827]: time="2026-04-21T12:03:48.812640825Z" level=info msg="RemovePodSandbox \"34285b3e7789eb47e0cde78e50515b6eb866d1896291007059bb1d6375ca5ab1\" returns successfully" Apr 21 12:03:48.813515 containerd[1827]: time="2026-04-21T12:03:48.813292340Z" level=info msg="StopPodSandbox for \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\"" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.875 [WARNING][5626] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"ee7802c0-cf49-4f20-9d8b-620165e7dc87", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e", Pod:"calico-apiserver-6d67f5f978-9429z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a61259e769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.876 [INFO][5626] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.876 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" iface="eth0" netns="" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.876 [INFO][5626] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.876 [INFO][5626] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.978 [INFO][5633] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.978 [INFO][5633] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.978 [INFO][5633] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.993 [WARNING][5633] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:48.993 [INFO][5633] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:49.002 [INFO][5633] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:49.011743 containerd[1827]: 2026-04-21 12:03:49.007 [INFO][5626] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.013802 containerd[1827]: time="2026-04-21T12:03:49.012912582Z" level=info msg="TearDown network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\" successfully" Apr 21 12:03:49.013802 containerd[1827]: time="2026-04-21T12:03:49.012956383Z" level=info msg="StopPodSandbox for \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\" returns successfully" Apr 21 12:03:49.014315 containerd[1827]: time="2026-04-21T12:03:49.014276914Z" level=info msg="RemovePodSandbox for \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\"" Apr 21 12:03:49.014519 containerd[1827]: time="2026-04-21T12:03:49.014318614Z" level=info msg="Forcibly stopping sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\"" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.089 [WARNING][5663] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"ee7802c0-cf49-4f20-9d8b-620165e7dc87", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"da9eb53dfd8db6757fe6ef606b74d1d0252567a6399b81f255ce3db94b3c503e", Pod:"calico-apiserver-6d67f5f978-9429z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a61259e769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.089 [INFO][5663] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.089 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" iface="eth0" netns="" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.089 [INFO][5663] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.089 [INFO][5663] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.140 [INFO][5670] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.140 [INFO][5670] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.140 [INFO][5670] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.149 [WARNING][5670] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.149 [INFO][5670] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" HandleID="k8s-pod-network.532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--9429z-eth0" Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.152 [INFO][5670] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:49.157607 containerd[1827]: 2026-04-21 12:03:49.154 [INFO][5663] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef" Apr 21 12:03:49.158350 containerd[1827]: time="2026-04-21T12:03:49.157664576Z" level=info msg="TearDown network for sandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\" successfully" Apr 21 12:03:49.168428 containerd[1827]: time="2026-04-21T12:03:49.168368520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:03:49.168636 containerd[1827]: time="2026-04-21T12:03:49.168461522Z" level=info msg="RemovePodSandbox \"532979a194db8c6411878dfa96feea2e8c549f70b33e84c3296e81a80a6768ef\" returns successfully" Apr 21 12:03:49.169478 containerd[1827]: time="2026-04-21T12:03:49.169251940Z" level=info msg="StopPodSandbox for \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\"" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.233 [WARNING][5685] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7be57639-09c2-4414-a7be-12502cbf1c2c", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32", Pod:"coredns-674b8bbfcf-nkbzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b5560a6a03", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.234 [INFO][5685] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.234 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" iface="eth0" netns="" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.234 [INFO][5685] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.234 [INFO][5685] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.273 [INFO][5693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.273 [INFO][5693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.273 [INFO][5693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.281 [WARNING][5693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.281 [INFO][5693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.283 [INFO][5693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:49.287961 containerd[1827]: 2026-04-21 12:03:49.285 [INFO][5685] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.288756 containerd[1827]: time="2026-04-21T12:03:49.288623756Z" level=info msg="TearDown network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\" successfully" Apr 21 12:03:49.288756 containerd[1827]: time="2026-04-21T12:03:49.288661757Z" level=info msg="StopPodSandbox for \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\" returns successfully" Apr 21 12:03:49.290374 containerd[1827]: time="2026-04-21T12:03:49.289873785Z" level=info msg="RemovePodSandbox for \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\"" Apr 21 12:03:49.290374 containerd[1827]: time="2026-04-21T12:03:49.289927286Z" level=info msg="Forcibly stopping sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\"" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.347 [WARNING][5708] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7be57639-09c2-4414-a7be-12502cbf1c2c", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"ed2c6b34fd97f87a0f176d3946a8fc16511303646303d4d6187784d0c627eb32", Pod:"coredns-674b8bbfcf-nkbzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b5560a6a03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 
12:03:49.347 [INFO][5708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.347 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" iface="eth0" netns="" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.347 [INFO][5708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.347 [INFO][5708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.384 [INFO][5716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.384 [INFO][5716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.384 [INFO][5716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.393 [WARNING][5716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.393 [INFO][5716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" HandleID="k8s-pod-network.27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--nkbzx-eth0" Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.395 [INFO][5716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:49.400255 containerd[1827]: 2026-04-21 12:03:49.397 [INFO][5708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7" Apr 21 12:03:49.401666 containerd[1827]: time="2026-04-21T12:03:49.401440823Z" level=info msg="TearDown network for sandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\" successfully" Apr 21 12:03:49.410829 containerd[1827]: time="2026-04-21T12:03:49.410256424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:03:49.410829 containerd[1827]: time="2026-04-21T12:03:49.410344026Z" level=info msg="RemovePodSandbox \"27b1bce572eab8c4f827c60692b2838fe3bea46014bb7015125bc9d3361ea1e7\" returns successfully" Apr 21 12:03:49.412360 containerd[1827]: time="2026-04-21T12:03:49.412305471Z" level=info msg="StopPodSandbox for \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\"" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.475 [WARNING][5730] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"9ddf0e0e-ea69-4d09-96e5-101c57c477a6", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685", Pod:"calico-apiserver-6d67f5f978-xpcf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3c3c332a8f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.475 [INFO][5730] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.476 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" iface="eth0" netns="" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.476 [INFO][5730] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.476 [INFO][5730] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.517 [INFO][5737] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.517 [INFO][5737] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.517 [INFO][5737] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.527 [WARNING][5737] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.527 [INFO][5737] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.530 [INFO][5737] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:49.535621 containerd[1827]: 2026-04-21 12:03:49.533 [INFO][5730] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.536537 containerd[1827]: time="2026-04-21T12:03:49.535766080Z" level=info msg="TearDown network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\" successfully" Apr 21 12:03:49.536537 containerd[1827]: time="2026-04-21T12:03:49.535890583Z" level=info msg="StopPodSandbox for \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\" returns successfully" Apr 21 12:03:49.536879 containerd[1827]: time="2026-04-21T12:03:49.536851605Z" level=info msg="RemovePodSandbox for \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\"" Apr 21 12:03:49.537333 containerd[1827]: time="2026-04-21T12:03:49.536924906Z" level=info msg="Forcibly stopping sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\"" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.597 [WARNING][5751] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0", GenerateName:"calico-apiserver-6d67f5f978-", Namespace:"calico-system", SelfLink:"", UID:"9ddf0e0e-ea69-4d09-96e5-101c57c477a6", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67f5f978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"aa4c0a1f284055f598de534dd6046deb1ff9795a1faa1714fe3812e679520685", Pod:"calico-apiserver-6d67f5f978-xpcf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3c3c332a8f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.597 [INFO][5751] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.597 [INFO][5751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" iface="eth0" netns="" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.597 [INFO][5751] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.597 [INFO][5751] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.633 [INFO][5759] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.633 [INFO][5759] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.633 [INFO][5759] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.641 [WARNING][5759] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.641 [INFO][5759] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" HandleID="k8s-pod-network.fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Workload="ci--4081.3.7--a--fffe528a55-k8s-calico--apiserver--6d67f5f978--xpcf7-eth0" Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.643 [INFO][5759] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:49.647567 containerd[1827]: 2026-04-21 12:03:49.644 [INFO][5751] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849" Apr 21 12:03:49.648548 containerd[1827]: time="2026-04-21T12:03:49.648389343Z" level=info msg="TearDown network for sandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\" successfully" Apr 21 12:03:49.658405 containerd[1827]: time="2026-04-21T12:03:49.658357270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:03:49.658617 containerd[1827]: time="2026-04-21T12:03:49.658441371Z" level=info msg="RemovePodSandbox \"fe69ff3c309286e5d8a0b70df1b43c3cdd8c54f8a6ec2e5495989f4e52d55849\" returns successfully" Apr 21 12:03:49.863386 containerd[1827]: time="2026-04-21T12:03:49.863329334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:49.866440 containerd[1827]: time="2026-04-21T12:03:49.866360803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 12:03:49.869726 containerd[1827]: time="2026-04-21T12:03:49.869640977Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:49.874451 containerd[1827]: time="2026-04-21T12:03:49.874374085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:49.875294 containerd[1827]: time="2026-04-21T12:03:49.875138302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.087993427s" Apr 21 12:03:49.875294 containerd[1827]: time="2026-04-21T12:03:49.875187204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 12:03:49.876848 containerd[1827]: time="2026-04-21T12:03:49.876560135Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 12:03:49.905183 containerd[1827]: time="2026-04-21T12:03:49.905130185Z" level=info msg="CreateContainer within sandbox \"18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 12:03:49.935261 containerd[1827]: time="2026-04-21T12:03:49.935202769Z" level=info msg="CreateContainer within sandbox \"18afa3b78ddd052670ae2cd4a3741af0c4b294630994fa07001ab28e56ef4143\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0b60b0bf2a51d3b1cb24641469f9856c43ec0091edff942f402cf51d5541c601\"" Apr 21 12:03:49.936007 containerd[1827]: time="2026-04-21T12:03:49.935938786Z" level=info msg="StartContainer for \"0b60b0bf2a51d3b1cb24641469f9856c43ec0091edff942f402cf51d5541c601\"" Apr 21 12:03:50.027629 containerd[1827]: time="2026-04-21T12:03:50.027567271Z" level=info msg="StartContainer for \"0b60b0bf2a51d3b1cb24641469f9856c43ec0091edff942f402cf51d5541c601\" returns successfully" Apr 21 12:03:50.269939 kubelet[3337]: I0421 12:03:50.269729 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fb546f67b-nbfw7" podStartSLOduration=31.06472393 podStartE2EDuration="38.268766859s" podCreationTimestamp="2026-04-21 12:03:12 +0000 UTC" firstStartedPulling="2026-04-21 12:03:42.6722759 +0000 UTC m=+54.919189550" lastFinishedPulling="2026-04-21 12:03:49.876318829 +0000 UTC m=+62.123232479" observedRunningTime="2026-04-21 12:03:50.267248325 +0000 UTC m=+62.514161975" watchObservedRunningTime="2026-04-21 12:03:50.268766859 +0000 UTC m=+62.515680509" Apr 21 12:03:52.458736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755202054.mount: Deactivated successfully. 
Apr 21 12:03:52.865857 containerd[1827]: time="2026-04-21T12:03:52.864533526Z" level=info msg="StopPodSandbox for \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\"" Apr 21 12:03:52.868893 containerd[1827]: time="2026-04-21T12:03:52.865899057Z" level=info msg="StopPodSandbox for \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\"" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:52.979 [INFO][5856] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:52.981 [INFO][5856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" iface="eth0" netns="/var/run/netns/cni-35068665-1174-d0a9-cf9c-17e73b984fb7" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:52.981 [INFO][5856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" iface="eth0" netns="/var/run/netns/cni-35068665-1174-d0a9-cf9c-17e73b984fb7" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:52.985 [INFO][5856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" iface="eth0" netns="/var/run/netns/cni-35068665-1174-d0a9-cf9c-17e73b984fb7" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:52.985 [INFO][5856] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:52.985 [INFO][5856] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:53.035 [INFO][5872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:53.035 [INFO][5872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:53.035 [INFO][5872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:53.044 [WARNING][5872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:53.045 [INFO][5872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:53.047 [INFO][5872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:53.050281 containerd[1827]: 2026-04-21 12:03:53.048 [INFO][5856] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:03:53.054603 containerd[1827]: time="2026-04-21T12:03:53.053830634Z" level=info msg="TearDown network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\" successfully" Apr 21 12:03:53.055060 containerd[1827]: time="2026-04-21T12:03:53.054845057Z" level=info msg="StopPodSandbox for \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\" returns successfully" Apr 21 12:03:53.058731 containerd[1827]: time="2026-04-21T12:03:53.058698844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mm7f,Uid:99e71ca6-4515-43f1-9773-fcdfc5a40507,Namespace:kube-system,Attempt:1,}" Apr 21 12:03:53.061656 systemd[1]: run-netns-cni\x2d35068665\x2d1174\x2dd0a9\x2dcf9c\x2d17e73b984fb7.mount: Deactivated successfully. 
Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:52.979 [INFO][5857] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:52.981 [INFO][5857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" iface="eth0" netns="/var/run/netns/cni-424a17e2-18ce-9214-7acc-ecae13ebfa16" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:52.984 [INFO][5857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" iface="eth0" netns="/var/run/netns/cni-424a17e2-18ce-9214-7acc-ecae13ebfa16" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:52.986 [INFO][5857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" iface="eth0" netns="/var/run/netns/cni-424a17e2-18ce-9214-7acc-ecae13ebfa16" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:52.986 [INFO][5857] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:52.986 [INFO][5857] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:53.041 [INFO][5871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:53.042 [INFO][5871] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:53.047 [INFO][5871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:53.056 [WARNING][5871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:53.056 [INFO][5871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:53.060 [INFO][5871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:53.069138 containerd[1827]: 2026-04-21 12:03:53.064 [INFO][5857] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:03:53.069138 containerd[1827]: time="2026-04-21T12:03:53.068657771Z" level=info msg="TearDown network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\" successfully" Apr 21 12:03:53.069138 containerd[1827]: time="2026-04-21T12:03:53.068715972Z" level=info msg="StopPodSandbox for \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\" returns successfully" Apr 21 12:03:53.075449 systemd[1]: run-netns-cni\x2d424a17e2\x2d18ce\x2d9214\x2d7acc\x2decae13ebfa16.mount: Deactivated successfully. 
Apr 21 12:03:53.076676 containerd[1827]: time="2026-04-21T12:03:53.076287945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vbvh,Uid:63866721-d13a-44fd-83f4-80f75344c988,Namespace:calico-system,Attempt:1,}" Apr 21 12:03:53.138254 containerd[1827]: time="2026-04-21T12:03:53.138007949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:53.149463 containerd[1827]: time="2026-04-21T12:03:53.148160780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 12:03:53.155431 containerd[1827]: time="2026-04-21T12:03:53.152423577Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:53.162546 containerd[1827]: time="2026-04-21T12:03:53.162483406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:53.163517 containerd[1827]: time="2026-04-21T12:03:53.163453628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.286823592s" Apr 21 12:03:53.163696 containerd[1827]: time="2026-04-21T12:03:53.163674833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 12:03:53.168254 containerd[1827]: time="2026-04-21T12:03:53.168220237Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 12:03:53.179964 containerd[1827]: time="2026-04-21T12:03:53.179906903Z" level=info msg="CreateContainer within sandbox \"cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 12:03:53.234030 containerd[1827]: time="2026-04-21T12:03:53.233972633Z" level=info msg="CreateContainer within sandbox \"cd7a9a5460c3c690530fef9d74684c31d08e79684e0c9163febf7f4ed99f7fc9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"39d57e20c5dea5849bc44cee43157e9312eaa9813c7909aef4fae85f1bafaab4\"" Apr 21 12:03:53.236437 containerd[1827]: time="2026-04-21T12:03:53.236390488Z" level=info msg="StartContainer for \"39d57e20c5dea5849bc44cee43157e9312eaa9813c7909aef4fae85f1bafaab4\"" Apr 21 12:03:53.364551 systemd-networkd[1402]: cali8f5fea27050: Link UP Apr 21 12:03:53.367926 systemd-networkd[1402]: cali8f5fea27050: Gained carrier Apr 21 12:03:53.391368 containerd[1827]: time="2026-04-21T12:03:53.390090785Z" level=info msg="StartContainer for \"39d57e20c5dea5849bc44cee43157e9312eaa9813c7909aef4fae85f1bafaab4\" returns successfully" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.233 [INFO][5898] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0 csi-node-driver- calico-system 63866721-d13a-44fd-83f4-80f75344c988 1058 0 2026-04-21 12:03:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 csi-node-driver-4vbvh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f5fea27050 [] [] }} 
ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.233 [INFO][5898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.283 [INFO][5918] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" HandleID="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.298 [INFO][5918] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" HandleID="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"csi-node-driver-4vbvh", "timestamp":"2026-04-21 12:03:53.283176552 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186f20)} Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.298 [INFO][5918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.298 [INFO][5918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.299 [INFO][5918] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.302 [INFO][5918] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.313 [INFO][5918] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.318 [INFO][5918] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.321 [INFO][5918] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.324 [INFO][5918] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.324 [INFO][5918] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.326 [INFO][5918] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8 Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.338 [INFO][5918] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" 
host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.353 [INFO][5918] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.7/26] block=192.168.50.0/26 handle="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.354 [INFO][5918] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.7/26] handle="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.354 [INFO][5918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:53.398579 containerd[1827]: 2026-04-21 12:03:53.354 [INFO][5918] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.7/26] IPv6=[] ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" HandleID="k8s-pod-network.f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.399569 containerd[1827]: 2026-04-21 12:03:53.359 [INFO][5898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63866721-d13a-44fd-83f4-80f75344c988", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"csi-node-driver-4vbvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5fea27050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:53.399569 containerd[1827]: 2026-04-21 12:03:53.359 [INFO][5898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.7/32] ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.399569 containerd[1827]: 2026-04-21 12:03:53.360 [INFO][5898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f5fea27050 ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.399569 containerd[1827]: 2026-04-21 12:03:53.363 [INFO][5898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" 
Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.399569 containerd[1827]: 2026-04-21 12:03:53.363 [INFO][5898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63866721-d13a-44fd-83f4-80f75344c988", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8", Pod:"csi-node-driver-4vbvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5fea27050", MAC:"1a:cb:95:9b:a9:94", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:53.399569 containerd[1827]: 2026-04-21 12:03:53.390 [INFO][5898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8" Namespace="calico-system" Pod="csi-node-driver-4vbvh" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:03:53.475724 containerd[1827]: time="2026-04-21T12:03:53.474959917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:53.477514 containerd[1827]: time="2026-04-21T12:03:53.476516452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:53.477514 containerd[1827]: time="2026-04-21T12:03:53.476562353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:53.477514 containerd[1827]: time="2026-04-21T12:03:53.476688756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:53.515252 systemd-networkd[1402]: cali3fdd9ab4176: Link UP Apr 21 12:03:53.523751 systemd-networkd[1402]: cali3fdd9ab4176: Gained carrier Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.218 [INFO][5887] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0 coredns-674b8bbfcf- kube-system 99e71ca6-4515-43f1-9773-fcdfc5a40507 1057 0 2026-04-21 12:02:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-fffe528a55 coredns-674b8bbfcf-6mm7f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3fdd9ab4176 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.218 [INFO][5887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.297 [INFO][5912] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" HandleID="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.311 [INFO][5912] ipam/ipam_plugin.go 
301: Auto assigning IP ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" HandleID="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125a10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-fffe528a55", "pod":"coredns-674b8bbfcf-6mm7f", "timestamp":"2026-04-21 12:03:53.297970989 +0000 UTC"}, Hostname:"ci-4081.3.7-a-fffe528a55", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004bd1e0)} Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.312 [INFO][5912] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.354 [INFO][5912] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.354 [INFO][5912] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-fffe528a55' Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.412 [INFO][5912] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.432 [INFO][5912] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.446 [INFO][5912] ipam/ipam.go 526: Trying affinity for 192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.452 [INFO][5912] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.456 [INFO][5912] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.457 [INFO][5912] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.458 [INFO][5912] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0 Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.468 [INFO][5912] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.484 [INFO][5912] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.50.8/26] block=192.168.50.0/26 handle="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.484 [INFO][5912] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.8/26] handle="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" host="ci-4081.3.7-a-fffe528a55" Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.484 [INFO][5912] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:03:53.584271 containerd[1827]: 2026-04-21 12:03:53.484 [INFO][5912] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.8/26] IPv6=[] ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" HandleID="k8s-pod-network.dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.590257 containerd[1827]: 2026-04-21 12:03:53.495 [INFO][5887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"99e71ca6-4515-43f1-9773-fcdfc5a40507", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"", Pod:"coredns-674b8bbfcf-6mm7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fdd9ab4176", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:53.590257 containerd[1827]: 2026-04-21 12:03:53.495 [INFO][5887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.8/32] ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.590257 containerd[1827]: 2026-04-21 12:03:53.495 [INFO][5887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fdd9ab4176 ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.590257 containerd[1827]: 2026-04-21 12:03:53.514 [INFO][5887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.590257 containerd[1827]: 2026-04-21 12:03:53.531 [INFO][5887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"99e71ca6-4515-43f1-9773-fcdfc5a40507", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0", Pod:"coredns-674b8bbfcf-6mm7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fdd9ab4176", MAC:"5e:28:d9:79:5d:a3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:03:53.590257 containerd[1827]: 2026-04-21 12:03:53.559 [INFO][5887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-6mm7f" WorkloadEndpoint="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:03:53.654142 containerd[1827]: time="2026-04-21T12:03:53.653999791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4vbvh,Uid:63866721-d13a-44fd-83f4-80f75344c988,Namespace:calico-system,Attempt:1,} returns sandbox id \"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8\"" Apr 21 12:03:53.675709 containerd[1827]: time="2026-04-21T12:03:53.675371877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:53.675709 containerd[1827]: time="2026-04-21T12:03:53.675457179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:53.675709 containerd[1827]: time="2026-04-21T12:03:53.675573681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:53.677274 containerd[1827]: time="2026-04-21T12:03:53.677169918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:53.750877 containerd[1827]: time="2026-04-21T12:03:53.750823494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mm7f,Uid:99e71ca6-4515-43f1-9773-fcdfc5a40507,Namespace:kube-system,Attempt:1,} returns sandbox id \"dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0\"" Apr 21 12:03:53.760155 containerd[1827]: time="2026-04-21T12:03:53.760091905Z" level=info msg="CreateContainer within sandbox \"dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 12:03:53.791933 containerd[1827]: time="2026-04-21T12:03:53.791877328Z" level=info msg="CreateContainer within sandbox \"dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"265acc4483325ab5def0f2dabc1602cde9c1ec7d6134546059d6fea3e8ec4a96\"" Apr 21 12:03:53.792634 containerd[1827]: time="2026-04-21T12:03:53.792586844Z" level=info msg="StartContainer for \"265acc4483325ab5def0f2dabc1602cde9c1ec7d6134546059d6fea3e8ec4a96\"" Apr 21 12:03:53.852943 containerd[1827]: time="2026-04-21T12:03:53.852821515Z" level=info msg="StartContainer for \"265acc4483325ab5def0f2dabc1602cde9c1ec7d6134546059d6fea3e8ec4a96\" returns successfully" Apr 21 12:03:54.305168 kubelet[3337]: I0421 12:03:54.304957 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-xldpb" podStartSLOduration=33.841751013 podStartE2EDuration="44.304921202s" podCreationTimestamp="2026-04-21 12:03:10 +0000 UTC" firstStartedPulling="2026-04-21 12:03:42.702421388 +0000 UTC m=+54.949335038" lastFinishedPulling="2026-04-21 12:03:53.165591577 +0000 UTC m=+65.412505227" observedRunningTime="2026-04-21 12:03:54.301662028 +0000 UTC m=+66.548575678" watchObservedRunningTime="2026-04-21 12:03:54.304921202 +0000 UTC m=+66.551834852" Apr 21 
12:03:54.974325 containerd[1827]: time="2026-04-21T12:03:54.974265733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:54.976798 containerd[1827]: time="2026-04-21T12:03:54.976723989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 12:03:54.979812 containerd[1827]: time="2026-04-21T12:03:54.979747058Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:54.984717 containerd[1827]: time="2026-04-21T12:03:54.984656570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:54.985606 containerd[1827]: time="2026-04-21T12:03:54.985438187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.816071025s" Apr 21 12:03:54.985606 containerd[1827]: time="2026-04-21T12:03:54.985479688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 12:03:54.987864 containerd[1827]: time="2026-04-21T12:03:54.987162627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 12:03:54.992859 containerd[1827]: time="2026-04-21T12:03:54.992828456Z" level=info msg="CreateContainer within sandbox \"7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4\" for container 
&ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 12:03:55.033258 containerd[1827]: time="2026-04-21T12:03:55.033203874Z" level=info msg="CreateContainer within sandbox \"7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3bc9a0dc7891fc3178fdf53c45ee6858a973e8a47337d00b85681c2e3aec2873\"" Apr 21 12:03:55.034065 containerd[1827]: time="2026-04-21T12:03:55.033972292Z" level=info msg="StartContainer for \"3bc9a0dc7891fc3178fdf53c45ee6858a973e8a47337d00b85681c2e3aec2873\"" Apr 21 12:03:55.118778 containerd[1827]: time="2026-04-21T12:03:55.118716820Z" level=info msg="StartContainer for \"3bc9a0dc7891fc3178fdf53c45ee6858a973e8a47337d00b85681c2e3aec2873\" returns successfully" Apr 21 12:03:55.311394 systemd[1]: run-containerd-runc-k8s.io-39d57e20c5dea5849bc44cee43157e9312eaa9813c7909aef4fae85f1bafaab4-runc.wibqsg.mount: Deactivated successfully. Apr 21 12:03:55.373036 systemd-networkd[1402]: cali3fdd9ab4176: Gained IPv6LL Apr 21 12:03:55.373451 systemd-networkd[1402]: cali8f5fea27050: Gained IPv6LL Apr 21 12:03:56.310287 systemd[1]: run-containerd-runc-k8s.io-39d57e20c5dea5849bc44cee43157e9312eaa9813c7909aef4fae85f1bafaab4-runc.4CeRp8.mount: Deactivated successfully. 
Apr 21 12:03:56.784451 containerd[1827]: time="2026-04-21T12:03:56.784291873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:56.787555 containerd[1827]: time="2026-04-21T12:03:56.787485545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 12:03:56.791284 containerd[1827]: time="2026-04-21T12:03:56.790976324Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:56.797149 containerd[1827]: time="2026-04-21T12:03:56.796972360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:56.798154 containerd[1827]: time="2026-04-21T12:03:56.797677376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.810477649s" Apr 21 12:03:56.798154 containerd[1827]: time="2026-04-21T12:03:56.797720577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 12:03:56.799978 containerd[1827]: time="2026-04-21T12:03:56.799715323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 12:03:56.806479 containerd[1827]: time="2026-04-21T12:03:56.806438575Z" level=info msg="CreateContainer within sandbox \"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 12:03:56.851743 containerd[1827]: time="2026-04-21T12:03:56.851680603Z" level=info msg="CreateContainer within sandbox \"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"77e20eeb14245c4dabc1bda1ee444c126e2a67232876b7bf327b9e2a3bf16e27\"" Apr 21 12:03:56.852719 containerd[1827]: time="2026-04-21T12:03:56.852593623Z" level=info msg="StartContainer for \"77e20eeb14245c4dabc1bda1ee444c126e2a67232876b7bf327b9e2a3bf16e27\"" Apr 21 12:03:56.933361 containerd[1827]: time="2026-04-21T12:03:56.933226754Z" level=info msg="StartContainer for \"77e20eeb14245c4dabc1bda1ee444c126e2a67232876b7bf327b9e2a3bf16e27\" returns successfully" Apr 21 12:03:59.053266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740355692.mount: Deactivated successfully. Apr 21 12:03:59.112442 containerd[1827]: time="2026-04-21T12:03:59.112379237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:59.115948 containerd[1827]: time="2026-04-21T12:03:59.115765614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 12:03:59.120092 containerd[1827]: time="2026-04-21T12:03:59.119781306Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:59.124782 containerd[1827]: time="2026-04-21T12:03:59.124714618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:59.125729 containerd[1827]: time="2026-04-21T12:03:59.125416733Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.32565991s" Apr 21 12:03:59.125729 containerd[1827]: time="2026-04-21T12:03:59.125458934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 12:03:59.127843 containerd[1827]: time="2026-04-21T12:03:59.127127172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 12:03:59.134563 containerd[1827]: time="2026-04-21T12:03:59.134530240Z" level=info msg="CreateContainer within sandbox \"7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 12:03:59.171060 containerd[1827]: time="2026-04-21T12:03:59.170998069Z" level=info msg="CreateContainer within sandbox \"7e38b7cdf75dd20b0ccb323cbf9aeb3ea30d8b1e580bc68e83ed95a269ce9af4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e3c0a5bc219ddee78296150670a0558be0cf5e795192aa2e382765fae19794e6\"" Apr 21 12:03:59.172328 containerd[1827]: time="2026-04-21T12:03:59.172088093Z" level=info msg="StartContainer for \"e3c0a5bc219ddee78296150670a0558be0cf5e795192aa2e382765fae19794e6\"" Apr 21 12:03:59.256912 containerd[1827]: time="2026-04-21T12:03:59.256844918Z" level=info msg="StartContainer for \"e3c0a5bc219ddee78296150670a0558be0cf5e795192aa2e382765fae19794e6\" returns successfully" Apr 21 12:03:59.318717 kubelet[3337]: I0421 12:03:59.317423 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6mm7f" podStartSLOduration=65.317397293 
podStartE2EDuration="1m5.317397293s" podCreationTimestamp="2026-04-21 12:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:03:54.332161522 +0000 UTC m=+66.579075172" watchObservedRunningTime="2026-04-21 12:03:59.317397293 +0000 UTC m=+71.564311043" Apr 21 12:04:00.964963 containerd[1827]: time="2026-04-21T12:04:00.964893403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:00.968176 containerd[1827]: time="2026-04-21T12:04:00.967979473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 12:04:00.971239 containerd[1827]: time="2026-04-21T12:04:00.971172646Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:00.976083 containerd[1827]: time="2026-04-21T12:04:00.976044156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:00.976926 containerd[1827]: time="2026-04-21T12:04:00.976758473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.848946784s" Apr 21 12:04:00.976926 containerd[1827]: time="2026-04-21T12:04:00.976799474Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 12:04:00.984305 containerd[1827]: time="2026-04-21T12:04:00.984271543Z" level=info msg="CreateContainer within sandbox \"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 12:04:01.025929 containerd[1827]: time="2026-04-21T12:04:01.025794786Z" level=info msg="CreateContainer within sandbox \"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"578ca683e4cb1540938d4b797c396fcc43671d9ec069bd7841fa041a20cd62b5\"" Apr 21 12:04:01.027099 containerd[1827]: time="2026-04-21T12:04:01.027025114Z" level=info msg="StartContainer for \"578ca683e4cb1540938d4b797c396fcc43671d9ec069bd7841fa041a20cd62b5\"" Apr 21 12:04:01.107219 containerd[1827]: time="2026-04-21T12:04:01.107164234Z" level=info msg="StartContainer for \"578ca683e4cb1540938d4b797c396fcc43671d9ec069bd7841fa041a20cd62b5\" returns successfully" Apr 21 12:04:01.322519 kubelet[3337]: I0421 12:04:01.320188 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fc698ddcb-n6cm2" podStartSLOduration=4.140071861 podStartE2EDuration="20.320163271s" podCreationTimestamp="2026-04-21 12:03:41 +0000 UTC" firstStartedPulling="2026-04-21 12:03:42.946649754 +0000 UTC m=+55.193563404" lastFinishedPulling="2026-04-21 12:03:59.126741064 +0000 UTC m=+71.373654814" observedRunningTime="2026-04-21 12:03:59.317924705 +0000 UTC m=+71.564838355" watchObservedRunningTime="2026-04-21 12:04:01.320163271 +0000 UTC m=+73.567076921" Apr 21 12:04:01.967219 kubelet[3337]: I0421 12:04:01.967171 3337 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 12:04:01.967219 kubelet[3337]: I0421 12:04:01.967224 3337 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 12:04:05.401060 kubelet[3337]: I0421 12:04:05.400908 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 12:04:05.432021 kubelet[3337]: I0421 12:04:05.431925 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4vbvh" podStartSLOduration=46.11587547 podStartE2EDuration="53.431899899s" podCreationTimestamp="2026-04-21 12:03:12 +0000 UTC" firstStartedPulling="2026-04-21 12:03:53.661854069 +0000 UTC m=+65.908767719" lastFinishedPulling="2026-04-21 12:04:00.977878398 +0000 UTC m=+73.224792148" observedRunningTime="2026-04-21 12:04:01.323250141 +0000 UTC m=+73.570163791" watchObservedRunningTime="2026-04-21 12:04:05.431899899 +0000 UTC m=+77.678813649" Apr 21 12:04:34.190377 kubelet[3337]: I0421 12:04:34.190127 3337 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 12:04:41.192605 systemd[1]: run-containerd-runc-k8s.io-5ef65f6aecec086c20945ea2d309c333a1ab4241841757fcf30bfe679d52bac1-runc.0kYhtZ.mount: Deactivated successfully. Apr 21 12:04:49.662893 containerd[1827]: time="2026-04-21T12:04:49.662410445Z" level=info msg="StopPodSandbox for \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\"" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.699 [WARNING][6560] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"99e71ca6-4515-43f1-9773-fcdfc5a40507", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0", Pod:"coredns-674b8bbfcf-6mm7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fdd9ab4176", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 
12:04:49.700 [INFO][6560] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.700 [INFO][6560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" iface="eth0" netns="" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.700 [INFO][6560] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.700 [INFO][6560] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.722 [INFO][6567] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.723 [INFO][6567] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.723 [INFO][6567] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.729 [WARNING][6567] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.729 [INFO][6567] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.732 [INFO][6567] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:49.735185 containerd[1827]: 2026-04-21 12:04:49.733 [INFO][6560] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.736529 containerd[1827]: time="2026-04-21T12:04:49.735579606Z" level=info msg="TearDown network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\" successfully" Apr 21 12:04:49.736529 containerd[1827]: time="2026-04-21T12:04:49.735618207Z" level=info msg="StopPodSandbox for \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\" returns successfully" Apr 21 12:04:49.736529 containerd[1827]: time="2026-04-21T12:04:49.736186920Z" level=info msg="RemovePodSandbox for \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\"" Apr 21 12:04:49.736529 containerd[1827]: time="2026-04-21T12:04:49.736228321Z" level=info msg="Forcibly stopping sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\"" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.773 [WARNING][6581] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"99e71ca6-4515-43f1-9773-fcdfc5a40507", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 2, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"dcf4f45ae299f5d7cb549eec7abeddf80c5ed108887ddfed651ebaa45b50fdf0", Pod:"coredns-674b8bbfcf-6mm7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3fdd9ab4176", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 
12:04:49.773 [INFO][6581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.773 [INFO][6581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" iface="eth0" netns="" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.773 [INFO][6581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.773 [INFO][6581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.796 [INFO][6588] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.797 [INFO][6588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.797 [INFO][6588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.802 [WARNING][6588] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.803 [INFO][6588] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" HandleID="k8s-pod-network.856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Workload="ci--4081.3.7--a--fffe528a55-k8s-coredns--674b8bbfcf--6mm7f-eth0" Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.804 [INFO][6588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:49.807256 containerd[1827]: 2026-04-21 12:04:49.805 [INFO][6581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d" Apr 21 12:04:49.807256 containerd[1827]: time="2026-04-21T12:04:49.807247933Z" level=info msg="TearDown network for sandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\" successfully" Apr 21 12:04:49.818686 containerd[1827]: time="2026-04-21T12:04:49.818628391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:04:49.818867 containerd[1827]: time="2026-04-21T12:04:49.818723993Z" level=info msg="RemovePodSandbox \"856ad6fe0b30d6d9d30332ab0944c74a3cce6b1be4d92e97631f38e80a0f4c4d\" returns successfully" Apr 21 12:04:49.819366 containerd[1827]: time="2026-04-21T12:04:49.819325307Z" level=info msg="StopPodSandbox for \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\"" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.856 [WARNING][6602] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63866721-d13a-44fd-83f4-80f75344c988", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8", Pod:"csi-node-driver-4vbvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5fea27050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.856 [INFO][6602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.856 [INFO][6602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" iface="eth0" netns="" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.856 [INFO][6602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.856 [INFO][6602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.884 [INFO][6609] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.884 [INFO][6609] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.884 [INFO][6609] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.891 [WARNING][6609] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.891 [INFO][6609] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.892 [INFO][6609] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:49.895347 containerd[1827]: 2026-04-21 12:04:49.894 [INFO][6602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.896020 containerd[1827]: time="2026-04-21T12:04:49.895413833Z" level=info msg="TearDown network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\" successfully" Apr 21 12:04:49.896020 containerd[1827]: time="2026-04-21T12:04:49.895452334Z" level=info msg="StopPodSandbox for \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\" returns successfully" Apr 21 12:04:49.896102 containerd[1827]: time="2026-04-21T12:04:49.896041148Z" level=info msg="RemovePodSandbox for \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\"" Apr 21 12:04:49.896102 containerd[1827]: time="2026-04-21T12:04:49.896077248Z" level=info msg="Forcibly stopping sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\"" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.934 [WARNING][6624] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63866721-d13a-44fd-83f4-80f75344c988", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-fffe528a55", ContainerID:"f976b70bd728bcc831cc42d2eb4989e66e7bba1db961e74e8340f329533c09d8", Pod:"csi-node-driver-4vbvh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5fea27050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.934 [INFO][6624] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.934 [INFO][6624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" iface="eth0" netns="" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.934 [INFO][6624] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.934 [INFO][6624] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.957 [INFO][6631] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.957 [INFO][6631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.957 [INFO][6631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.964 [WARNING][6631] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.965 [INFO][6631] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" HandleID="k8s-pod-network.60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Workload="ci--4081.3.7--a--fffe528a55-k8s-csi--node--driver--4vbvh-eth0" Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.966 [INFO][6631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:49.969229 containerd[1827]: 2026-04-21 12:04:49.967 [INFO][6624] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96" Apr 21 12:04:49.969229 containerd[1827]: time="2026-04-21T12:04:49.969160107Z" level=info msg="TearDown network for sandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\" successfully" Apr 21 12:04:49.981524 containerd[1827]: time="2026-04-21T12:04:49.981440186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 12:04:49.981690 containerd[1827]: time="2026-04-21T12:04:49.981558588Z" level=info msg="RemovePodSandbox \"60067fba17e07613af570d2791b2149522dcda15a030d036fdd2f0a56b051b96\" returns successfully" Apr 21 12:04:50.295164 systemd[1]: run-containerd-runc-k8s.io-0b60b0bf2a51d3b1cb24641469f9856c43ec0091edff942f402cf51d5541c601-runc.Sx47UB.mount: Deactivated successfully. 
Apr 21 12:05:01.433110 systemd[1]: Started sshd@7-10.0.0.5:22-20.229.252.112:53668.service - OpenSSH per-connection server daemon (20.229.252.112:53668). Apr 21 12:05:01.575308 sshd[6685]: Accepted publickey for core from 20.229.252.112 port 53668 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:01.578022 sshd[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:01.588963 systemd-logind[1793]: New session 10 of user core. Apr 21 12:05:01.592191 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 12:05:01.840666 sshd[6685]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:01.851138 systemd[1]: sshd@7-10.0.0.5:22-20.229.252.112:53668.service: Deactivated successfully. Apr 21 12:05:01.859468 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 12:05:01.861519 systemd-logind[1793]: Session 10 logged out. Waiting for processes to exit. Apr 21 12:05:01.864585 systemd-logind[1793]: Removed session 10. Apr 21 12:05:02.866300 systemd[1]: run-containerd-runc-k8s.io-39d57e20c5dea5849bc44cee43157e9312eaa9813c7909aef4fae85f1bafaab4-runc.0qYQYY.mount: Deactivated successfully. Apr 21 12:05:06.861249 systemd[1]: Started sshd@8-10.0.0.5:22-20.229.252.112:37098.service - OpenSSH per-connection server daemon (20.229.252.112:37098). Apr 21 12:05:06.976959 sshd[6730]: Accepted publickey for core from 20.229.252.112 port 37098 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:06.978743 sshd[6730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:06.985373 systemd-logind[1793]: New session 11 of user core. Apr 21 12:05:06.991087 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 12:05:07.152931 sshd[6730]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:07.156872 systemd[1]: sshd@8-10.0.0.5:22-20.229.252.112:37098.service: Deactivated successfully. 
Apr 21 12:05:07.162993 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 12:05:07.163900 systemd-logind[1793]: Session 11 logged out. Waiting for processes to exit. Apr 21 12:05:07.165956 systemd-logind[1793]: Removed session 11. Apr 21 12:05:12.176282 systemd[1]: Started sshd@9-10.0.0.5:22-20.229.252.112:37106.service - OpenSSH per-connection server daemon (20.229.252.112:37106). Apr 21 12:05:12.295954 sshd[6765]: Accepted publickey for core from 20.229.252.112 port 37106 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:12.297656 sshd[6765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:12.303112 systemd-logind[1793]: New session 12 of user core. Apr 21 12:05:12.307824 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 12:05:12.472773 sshd[6765]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:12.479522 systemd[1]: sshd@9-10.0.0.5:22-20.229.252.112:37106.service: Deactivated successfully. Apr 21 12:05:12.484101 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 12:05:12.485006 systemd-logind[1793]: Session 12 logged out. Waiting for processes to exit. Apr 21 12:05:12.486024 systemd-logind[1793]: Removed session 12. Apr 21 12:05:17.497853 systemd[1]: Started sshd@10-10.0.0.5:22-20.229.252.112:35872.service - OpenSSH per-connection server daemon (20.229.252.112:35872). Apr 21 12:05:17.612373 sshd[6791]: Accepted publickey for core from 20.229.252.112 port 35872 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:17.614108 sshd[6791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:17.618555 systemd-logind[1793]: New session 13 of user core. Apr 21 12:05:17.623761 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 21 12:05:17.783564 sshd[6791]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:17.787724 systemd[1]: sshd@10-10.0.0.5:22-20.229.252.112:35872.service: Deactivated successfully. Apr 21 12:05:17.793727 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 12:05:17.794963 systemd-logind[1793]: Session 13 logged out. Waiting for processes to exit. Apr 21 12:05:17.796718 systemd-logind[1793]: Removed session 13. Apr 21 12:05:22.808871 systemd[1]: Started sshd@11-10.0.0.5:22-20.229.252.112:35874.service - OpenSSH per-connection server daemon (20.229.252.112:35874). Apr 21 12:05:22.923433 sshd[6855]: Accepted publickey for core from 20.229.252.112 port 35874 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:22.925047 sshd[6855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:22.929563 systemd-logind[1793]: New session 14 of user core. Apr 21 12:05:22.936808 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 12:05:23.104239 sshd[6855]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:23.109928 systemd[1]: sshd@11-10.0.0.5:22-20.229.252.112:35874.service: Deactivated successfully. Apr 21 12:05:23.110428 systemd-logind[1793]: Session 14 logged out. Waiting for processes to exit. Apr 21 12:05:23.115048 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 12:05:23.117195 systemd-logind[1793]: Removed session 14. Apr 21 12:05:28.129189 systemd[1]: Started sshd@12-10.0.0.5:22-20.229.252.112:46170.service - OpenSSH per-connection server daemon (20.229.252.112:46170). Apr 21 12:05:28.242073 sshd[6892]: Accepted publickey for core from 20.229.252.112 port 46170 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:28.243711 sshd[6892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:28.248999 systemd-logind[1793]: New session 15 of user core. 
Apr 21 12:05:28.258838 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 12:05:28.415869 sshd[6892]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:28.421600 systemd[1]: sshd@12-10.0.0.5:22-20.229.252.112:46170.service: Deactivated successfully. Apr 21 12:05:28.426552 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 12:05:28.427522 systemd-logind[1793]: Session 15 logged out. Waiting for processes to exit. Apr 21 12:05:28.428681 systemd-logind[1793]: Removed session 15. Apr 21 12:05:33.440865 systemd[1]: Started sshd@13-10.0.0.5:22-20.229.252.112:46172.service - OpenSSH per-connection server daemon (20.229.252.112:46172). Apr 21 12:05:33.555484 sshd[6927]: Accepted publickey for core from 20.229.252.112 port 46172 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:33.557213 sshd[6927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:33.566476 systemd-logind[1793]: New session 16 of user core. Apr 21 12:05:33.571425 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 12:05:33.733730 sshd[6927]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:33.739905 systemd[1]: sshd@13-10.0.0.5:22-20.229.252.112:46172.service: Deactivated successfully. Apr 21 12:05:33.743987 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 12:05:33.745139 systemd-logind[1793]: Session 16 logged out. Waiting for processes to exit. Apr 21 12:05:33.746317 systemd-logind[1793]: Removed session 16. Apr 21 12:05:33.757064 systemd[1]: Started sshd@14-10.0.0.5:22-20.229.252.112:46180.service - OpenSSH per-connection server daemon (20.229.252.112:46180). 
Apr 21 12:05:33.872418 sshd[6941]: Accepted publickey for core from 20.229.252.112 port 46180 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:33.874065 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:33.879128 systemd-logind[1793]: New session 17 of user core. Apr 21 12:05:33.884816 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 12:05:34.094076 sshd[6941]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:34.097816 systemd[1]: sshd@14-10.0.0.5:22-20.229.252.112:46180.service: Deactivated successfully. Apr 21 12:05:34.110719 systemd-logind[1793]: Session 17 logged out. Waiting for processes to exit. Apr 21 12:05:34.111733 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 12:05:34.125328 systemd[1]: Started sshd@15-10.0.0.5:22-20.229.252.112:46190.service - OpenSSH per-connection server daemon (20.229.252.112:46190). Apr 21 12:05:34.126337 systemd-logind[1793]: Removed session 17. Apr 21 12:05:34.244630 sshd[6953]: Accepted publickey for core from 20.229.252.112 port 46190 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:34.246331 sshd[6953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:34.251578 systemd-logind[1793]: New session 18 of user core. Apr 21 12:05:34.258034 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 12:05:34.425884 sshd[6953]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:34.430656 systemd[1]: sshd@15-10.0.0.5:22-20.229.252.112:46190.service: Deactivated successfully. Apr 21 12:05:34.435646 systemd-logind[1793]: Session 18 logged out. Waiting for processes to exit. Apr 21 12:05:34.436353 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 12:05:34.438871 systemd-logind[1793]: Removed session 18. 
Apr 21 12:05:39.448862 systemd[1]: Started sshd@16-10.0.0.5:22-20.229.252.112:33544.service - OpenSSH per-connection server daemon (20.229.252.112:33544). Apr 21 12:05:39.569365 sshd[6967]: Accepted publickey for core from 20.229.252.112 port 33544 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:39.571208 sshd[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:39.576288 systemd-logind[1793]: New session 19 of user core. Apr 21 12:05:39.581772 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 12:05:39.745457 sshd[6967]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:39.749651 systemd[1]: sshd@16-10.0.0.5:22-20.229.252.112:33544.service: Deactivated successfully. Apr 21 12:05:39.755752 systemd-logind[1793]: Session 19 logged out. Waiting for processes to exit. Apr 21 12:05:39.756012 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 12:05:39.757768 systemd-logind[1793]: Removed session 19. Apr 21 12:05:39.767788 systemd[1]: Started sshd@17-10.0.0.5:22-20.229.252.112:33558.service - OpenSSH per-connection server daemon (20.229.252.112:33558). Apr 21 12:05:39.881839 sshd[6981]: Accepted publickey for core from 20.229.252.112 port 33558 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:39.883394 sshd[6981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:39.888366 systemd-logind[1793]: New session 20 of user core. Apr 21 12:05:39.893782 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 12:05:40.129878 sshd[6981]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:40.135233 systemd[1]: sshd@17-10.0.0.5:22-20.229.252.112:33558.service: Deactivated successfully. Apr 21 12:05:40.139407 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 12:05:40.140657 systemd-logind[1793]: Session 20 logged out. Waiting for processes to exit. 
Apr 21 12:05:40.141721 systemd-logind[1793]: Removed session 20. Apr 21 12:05:40.150785 systemd[1]: Started sshd@18-10.0.0.5:22-20.229.252.112:33570.service - OpenSSH per-connection server daemon (20.229.252.112:33570). Apr 21 12:05:40.274231 sshd[6993]: Accepted publickey for core from 20.229.252.112 port 33570 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:40.275924 sshd[6993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:40.280595 systemd-logind[1793]: New session 21 of user core. Apr 21 12:05:40.284806 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 21 12:05:41.013126 sshd[6993]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:41.019855 systemd-logind[1793]: Session 21 logged out. Waiting for processes to exit. Apr 21 12:05:41.021118 systemd[1]: sshd@18-10.0.0.5:22-20.229.252.112:33570.service: Deactivated successfully. Apr 21 12:05:41.033126 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 12:05:41.049896 systemd[1]: Started sshd@19-10.0.0.5:22-20.229.252.112:33572.service - OpenSSH per-connection server daemon (20.229.252.112:33572). Apr 21 12:05:41.052804 systemd-logind[1793]: Removed session 21. Apr 21 12:05:41.208518 sshd[7021]: Accepted publickey for core from 20.229.252.112 port 33572 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:41.211603 sshd[7021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:41.220566 systemd-logind[1793]: New session 22 of user core. Apr 21 12:05:41.224662 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 21 12:05:41.519804 sshd[7021]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:41.528126 systemd[1]: sshd@19-10.0.0.5:22-20.229.252.112:33572.service: Deactivated successfully. Apr 21 12:05:41.531670 systemd-logind[1793]: Session 22 logged out. Waiting for processes to exit. 
Apr 21 12:05:41.534988 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 12:05:41.537064 systemd-logind[1793]: Removed session 22. Apr 21 12:05:41.544793 systemd[1]: Started sshd@20-10.0.0.5:22-20.229.252.112:33588.service - OpenSSH per-connection server daemon (20.229.252.112:33588). Apr 21 12:05:41.670077 sshd[7072]: Accepted publickey for core from 20.229.252.112 port 33588 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:41.671708 sshd[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:41.676630 systemd-logind[1793]: New session 23 of user core. Apr 21 12:05:41.680099 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 21 12:05:41.842842 sshd[7072]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:41.848151 systemd[1]: sshd@20-10.0.0.5:22-20.229.252.112:33588.service: Deactivated successfully. Apr 21 12:05:41.848727 systemd-logind[1793]: Session 23 logged out. Waiting for processes to exit. Apr 21 12:05:41.853664 systemd[1]: session-23.scope: Deactivated successfully. Apr 21 12:05:41.854822 systemd-logind[1793]: Removed session 23. Apr 21 12:05:46.869860 systemd[1]: Started sshd@21-10.0.0.5:22-20.229.252.112:60764.service - OpenSSH per-connection server daemon (20.229.252.112:60764). Apr 21 12:05:46.987456 sshd[7086]: Accepted publickey for core from 20.229.252.112 port 60764 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:46.989990 sshd[7086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:46.995116 systemd-logind[1793]: New session 24 of user core. Apr 21 12:05:46.999864 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 21 12:05:47.163647 sshd[7086]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:47.167374 systemd[1]: sshd@21-10.0.0.5:22-20.229.252.112:60764.service: Deactivated successfully. 
Apr 21 12:05:47.174264 systemd[1]: session-24.scope: Deactivated successfully. Apr 21 12:05:47.175329 systemd-logind[1793]: Session 24 logged out. Waiting for processes to exit. Apr 21 12:05:47.176391 systemd-logind[1793]: Removed session 24. Apr 21 12:05:52.194814 systemd[1]: Started sshd@22-10.0.0.5:22-20.229.252.112:60770.service - OpenSSH per-connection server daemon (20.229.252.112:60770). Apr 21 12:05:52.313297 sshd[7122]: Accepted publickey for core from 20.229.252.112 port 60770 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:52.314029 sshd[7122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:52.319996 systemd-logind[1793]: New session 25 of user core. Apr 21 12:05:52.325941 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 21 12:05:52.487428 sshd[7122]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:52.492361 systemd[1]: sshd@22-10.0.0.5:22-20.229.252.112:60770.service: Deactivated successfully. Apr 21 12:05:52.497272 systemd[1]: session-25.scope: Deactivated successfully. Apr 21 12:05:52.498137 systemd-logind[1793]: Session 25 logged out. Waiting for processes to exit. Apr 21 12:05:52.499222 systemd-logind[1793]: Removed session 25. Apr 21 12:05:56.312307 systemd[1]: run-containerd-runc-k8s.io-39d57e20c5dea5849bc44cee43157e9312eaa9813c7909aef4fae85f1bafaab4-runc.DOghSL.mount: Deactivated successfully. Apr 21 12:05:57.512840 systemd[1]: Started sshd@23-10.0.0.5:22-20.229.252.112:52998.service - OpenSSH per-connection server daemon (20.229.252.112:52998). Apr 21 12:05:57.630095 sshd[7157]: Accepted publickey for core from 20.229.252.112 port 52998 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:05:57.631818 sshd[7157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:05:57.636556 systemd-logind[1793]: New session 26 of user core. 
Apr 21 12:05:57.640787 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 21 12:05:57.801406 sshd[7157]: pam_unix(sshd:session): session closed for user core Apr 21 12:05:57.804838 systemd[1]: sshd@23-10.0.0.5:22-20.229.252.112:52998.service: Deactivated successfully. Apr 21 12:05:57.810849 systemd-logind[1793]: Session 26 logged out. Waiting for processes to exit. Apr 21 12:05:57.812101 systemd[1]: session-26.scope: Deactivated successfully. Apr 21 12:05:57.813323 systemd-logind[1793]: Removed session 26. Apr 21 12:06:02.830869 systemd[1]: Started sshd@24-10.0.0.5:22-20.229.252.112:53012.service - OpenSSH per-connection server daemon (20.229.252.112:53012). Apr 21 12:06:02.967730 sshd[7171]: Accepted publickey for core from 20.229.252.112 port 53012 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:06:02.968436 sshd[7171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:06:02.975950 systemd-logind[1793]: New session 27 of user core. Apr 21 12:06:02.981834 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 21 12:06:03.144431 sshd[7171]: pam_unix(sshd:session): session closed for user core Apr 21 12:06:03.148757 systemd[1]: sshd@24-10.0.0.5:22-20.229.252.112:53012.service: Deactivated successfully. Apr 21 12:06:03.153459 systemd-logind[1793]: Session 27 logged out. Waiting for processes to exit. Apr 21 12:06:03.155112 systemd[1]: session-27.scope: Deactivated successfully. Apr 21 12:06:03.156397 systemd-logind[1793]: Removed session 27. Apr 21 12:06:08.168250 systemd[1]: Started sshd@25-10.0.0.5:22-20.229.252.112:42540.service - OpenSSH per-connection server daemon (20.229.252.112:42540). 
Apr 21 12:06:08.293579 sshd[7206]: Accepted publickey for core from 20.229.252.112 port 42540 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:06:08.294258 sshd[7206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:06:08.299026 systemd-logind[1793]: New session 28 of user core. Apr 21 12:06:08.305846 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 21 12:06:08.467704 sshd[7206]: pam_unix(sshd:session): session closed for user core Apr 21 12:06:08.473687 systemd[1]: sshd@25-10.0.0.5:22-20.229.252.112:42540.service: Deactivated successfully. Apr 21 12:06:08.477807 systemd[1]: session-28.scope: Deactivated successfully. Apr 21 12:06:08.478942 systemd-logind[1793]: Session 28 logged out. Waiting for processes to exit. Apr 21 12:06:08.480008 systemd-logind[1793]: Removed session 28. Apr 21 12:06:13.492878 systemd[1]: Started sshd@26-10.0.0.5:22-20.229.252.112:42542.service - OpenSSH per-connection server daemon (20.229.252.112:42542). Apr 21 12:06:13.608536 sshd[7243]: Accepted publickey for core from 20.229.252.112 port 42542 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds Apr 21 12:06:13.610196 sshd[7243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 12:06:13.614945 systemd-logind[1793]: New session 29 of user core. Apr 21 12:06:13.618756 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 21 12:06:13.781945 sshd[7243]: pam_unix(sshd:session): session closed for user core Apr 21 12:06:13.789040 systemd[1]: sshd@26-10.0.0.5:22-20.229.252.112:42542.service: Deactivated successfully. Apr 21 12:06:13.792394 systemd[1]: session-29.scope: Deactivated successfully. Apr 21 12:06:13.793839 systemd-logind[1793]: Session 29 logged out. Waiting for processes to exit. Apr 21 12:06:13.794964 systemd-logind[1793]: Removed session 29.