Apr 17 23:37:31.120559 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:37:31.120591 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:37:31.120619 kernel: BIOS-provided physical RAM map:
Apr 17 23:37:31.120629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:37:31.120639 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 17 23:37:31.120650 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Apr 17 23:37:31.120663 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Apr 17 23:37:31.120673 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Apr 17 23:37:31.120686 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Apr 17 23:37:31.120697 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Apr 17 23:37:31.120708 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 17 23:37:31.120719 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 17 23:37:31.120730 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 17 23:37:31.120741 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 17 23:37:31.120759 kernel: printk: bootconsole [earlyser0] enabled
Apr 17 23:37:31.120770 kernel: NX (Execute Disable) protection: active
Apr 17 23:37:31.120783 kernel: APIC: Static calls initialized
Apr 17 23:37:31.120794 kernel: efi: EFI v2.7 by Microsoft
Apr 17 23:37:31.120805 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f420418
Apr 17 23:37:31.120817 kernel: SMBIOS 3.1.0 present.
Apr 17 23:37:31.120829 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 17 23:37:31.120841 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 17 23:37:31.120853 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 17 23:37:31.120864 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0
Apr 17 23:37:31.120875 kernel: Hyper-V: Nested features: 0x1e0101
Apr 17 23:37:31.120893 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 17 23:37:31.120903 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 17 23:37:31.120916 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 17 23:37:31.120928 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 17 23:37:31.120940 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 17 23:37:31.120953 kernel: tsc: Detected 2593.906 MHz processor
Apr 17 23:37:31.120966 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:37:31.120977 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:37:31.120990 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 17 23:37:31.121007 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:37:31.121020 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:37:31.121033 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 17 23:37:31.121044 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 17 23:37:31.121054 kernel: Using GB pages for direct mapping
Apr 17 23:37:31.121066 kernel: Secure boot disabled
Apr 17 23:37:31.121085 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:37:31.121100 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 17 23:37:31.121115 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121129 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121144 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 17 23:37:31.121157 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 17 23:37:31.121169 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121181 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121196 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121208 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121221 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121235 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:37:31.121249 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 17 23:37:31.121262 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Apr 17 23:37:31.121275 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 17 23:37:31.121288 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 17 23:37:31.121302 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 17 23:37:31.121319 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 17 23:37:31.121332 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 17 23:37:31.121346 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Apr 17 23:37:31.121360 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 17 23:37:31.121373 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:37:31.121387 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:37:31.121400 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 17 23:37:31.121414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 17 23:37:31.121427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 17 23:37:31.121443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 17 23:37:31.121457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 17 23:37:31.121471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 17 23:37:31.121485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 17 23:37:31.121498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 17 23:37:31.121512 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 17 23:37:31.121525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 17 23:37:31.121539 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 17 23:37:31.121555 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 17 23:37:31.121569 kernel: Zone ranges:
Apr 17 23:37:31.121583 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:37:31.121596 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Apr 17 23:37:31.121746 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Apr 17 23:37:31.121760 kernel: Movable zone start for each node
Apr 17 23:37:31.121773 kernel: Early memory node ranges
Apr 17 23:37:31.121786 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:37:31.121798 kernel:   node   0: [mem 0x0000000000100000-0x000000000437dfff]
Apr 17 23:37:31.121829 kernel:   node   0: [mem 0x000000000477e000-0x000000003ff1efff]
Apr 17 23:37:31.121855 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 17 23:37:31.121876 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 17 23:37:31.121886 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 17 23:37:31.121899 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:37:31.121912 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:37:31.121925 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 17 23:37:31.121938 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Apr 17 23:37:31.121950 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 17 23:37:31.121964 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 17 23:37:31.121977 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:37:31.121991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:37:31.122005 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:37:31.122018 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 17 23:37:31.122029 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:37:31.122041 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 17 23:37:31.122054 kernel: Booting paravirtualized kernel on Hyper-V
Apr 17 23:37:31.122066 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:37:31.122089 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:37:31.122105 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:37:31.122116 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:37:31.122127 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:37:31.122139 kernel: Hyper-V: PV spinlocks enabled
Apr 17 23:37:31.122152 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:37:31.122166 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:37:31.122178 kernel: random: crng init done
Apr 17 23:37:31.122194 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 17 23:37:31.122206 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:37:31.122219 kernel: Fallback order for Node 0: 0
Apr 17 23:37:31.122232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Apr 17 23:37:31.122244 kernel: Policy zone: Normal
Apr 17 23:37:31.122257 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:37:31.122270 kernel: software IO TLB: area num 2.
Apr 17 23:37:31.122285 kernel: Memory: 8066036K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316932K reserved, 0K cma-reserved)
Apr 17 23:37:31.122299 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:37:31.122327 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:37:31.122342 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:37:31.122356 kernel: Dynamic Preempt: voluntary
Apr 17 23:37:31.122374 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:37:31.122394 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:37:31.122409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:37:31.122424 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:37:31.122439 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:37:31.122455 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:37:31.122472 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:37:31.122488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:37:31.122503 kernel: Using NULL legacy PIC
Apr 17 23:37:31.122518 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 17 23:37:31.122533 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:37:31.122548 kernel: Console: colour dummy device 80x25
Apr 17 23:37:31.122563 kernel: printk: console [tty1] enabled
Apr 17 23:37:31.122578 kernel: printk: console [ttyS0] enabled
Apr 17 23:37:31.122596 kernel: printk: bootconsole [earlyser0] disabled
Apr 17 23:37:31.122621 kernel: ACPI: Core revision 20230628
Apr 17 23:37:31.122633 kernel: Failed to register legacy timer interrupt
Apr 17 23:37:31.124641 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:37:31.124660 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 17 23:37:31.124676 kernel: Hyper-V: Using IPI hypercalls
Apr 17 23:37:31.124690 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 17 23:37:31.124704 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 17 23:37:31.124719 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 17 23:37:31.124738 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 17 23:37:31.124753 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 17 23:37:31.124767 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 17 23:37:31.124781 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 17 23:37:31.124795 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:37:31.124809 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:37:31.124823 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:37:31.124837 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:37:31.124858 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:37:31.124872 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:37:31.124890 kernel: RETBleed: Vulnerable
Apr 17 23:37:31.124905 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:37:31.124923 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:37:31.124936 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:37:31.124951 kernel: active return thunk: its_return_thunk
Apr 17 23:37:31.124965 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:37:31.124980 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:37:31.124995 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:37:31.125010 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:37:31.125025 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:37:31.125044 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:37:31.125059 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:37:31.125074 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:37:31.125088 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:37:31.125103 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:37:31.125118 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:37:31.125133 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:37:31.125148 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:37:31.125163 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:37:31.125178 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:37:31.125192 kernel: landlock: Up and running.
Apr 17 23:37:31.125207 kernel: SELinux: Initializing.
Apr 17 23:37:31.125225 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:37:31.125240 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:37:31.125255 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 17 23:37:31.125270 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:37:31.125286 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:37:31.125301 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:37:31.125316 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:37:31.125331 kernel: signal: max sigframe size: 3632
Apr 17 23:37:31.125346 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:37:31.125365 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:37:31.125380 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:37:31.125395 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:37:31.125410 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:37:31.125425 kernel: .... node #0, CPUs: #1
Apr 17 23:37:31.125440 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 17 23:37:31.125457 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:37:31.125472 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:37:31.125486 kernel: smpboot: Max logical packages: 1
Apr 17 23:37:31.125504 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 17 23:37:31.125519 kernel: devtmpfs: initialized
Apr 17 23:37:31.125534 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:37:31.125549 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 17 23:37:31.125565 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:37:31.125580 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:37:31.125595 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:37:31.125620 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:37:31.125633 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:37:31.125648 kernel: audit: type=2000 audit(1776469050.029:1): state=initialized audit_enabled=0 res=1
Apr 17 23:37:31.125660 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:37:31.125672 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:37:31.125684 kernel: cpuidle: using governor menu
Apr 17 23:37:31.125697 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:37:31.125706 kernel: dca service started, version 1.12.1
Apr 17 23:37:31.125714 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Apr 17 23:37:31.125722 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Apr 17 23:37:31.125730 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:37:31.125741 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:37:31.125749 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:37:31.125757 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:37:31.125765 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:37:31.125773 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:37:31.125781 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:37:31.125790 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:37:31.125798 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:37:31.125808 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:37:31.129503 kernel: ACPI: Interpreter enabled
Apr 17 23:37:31.129521 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:37:31.129535 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:37:31.129548 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:37:31.129561 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 17 23:37:31.129576 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 17 23:37:31.129589 kernel: iommu: Default domain type: Translated
Apr 17 23:37:31.131627 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:37:31.131641 kernel: efivars: Registered efivars operations
Apr 17 23:37:31.131658 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:37:31.131666 kernel: PCI: System does not support PCI
Apr 17 23:37:31.131677 kernel: vgaarb: loaded
Apr 17 23:37:31.131687 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 17 23:37:31.131695 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:37:31.131703 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:37:31.131716 kernel: pnp: PnP ACPI init
Apr 17 23:37:31.131724 kernel: pnp: PnP ACPI: found 3 devices
Apr 17 23:37:31.131732 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:37:31.131747 kernel: NET: Registered PF_INET protocol family
Apr 17 23:37:31.131756 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:37:31.131766 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 17 23:37:31.131777 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:37:31.131785 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:37:31.131797 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 17 23:37:31.131807 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 17 23:37:31.131819 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:37:31.131828 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:37:31.131839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:37:31.131851 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:37:31.131859 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:37:31.131868 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:37:31.131880 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 17 23:37:31.131889 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:37:31.131900 kernel: Initialise system trusted keyrings
Apr 17 23:37:31.131910 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 17 23:37:31.131920 kernel: Key type asymmetric registered
Apr 17 23:37:31.131932 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:37:31.131941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:37:31.131949 kernel: io scheduler mq-deadline registered
Apr 17 23:37:31.131961 kernel: io scheduler kyber registered
Apr 17 23:37:31.131969 kernel: io scheduler bfq registered
Apr 17 23:37:31.131977 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:37:31.131985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:37:31.131993 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:37:31.132002 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 17 23:37:31.132016 kernel: i8042: PNP: No PS/2 controller found.
Apr 17 23:37:31.132171 kernel: rtc_cmos 00:02: registered as rtc0
Apr 17 23:37:31.132275 kernel: rtc_cmos 00:02: setting system clock to 2026-04-17T23:37:30 UTC (1776469050)
Apr 17 23:37:31.132368 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 17 23:37:31.132384 kernel: intel_pstate: CPU model not supported
Apr 17 23:37:31.132392 kernel: efifb: probing for efifb
Apr 17 23:37:31.132405 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 17 23:37:31.132417 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 17 23:37:31.132429 kernel: efifb: scrolling: redraw
Apr 17 23:37:31.132437 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:37:31.132448 kernel: Console: switching to colour frame buffer device 128x48
Apr 17 23:37:31.132458 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:37:31.132467 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:37:31.132480 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:37:31.132488 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:37:31.132499 kernel: Segment Routing with IPv6
Apr 17 23:37:31.132510 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:37:31.132519 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:37:31.132531 kernel: Key type dns_resolver registered
Apr 17 23:37:31.132539 kernel: IPI shorthand broadcast: enabled
Apr 17 23:37:31.132552 kernel: sched_clock: Marking stable (907002900, 49801000)->(1197654400, -240850500)
Apr 17 23:37:31.132560 kernel: registered taskstats version 1
Apr 17 23:37:31.132569 kernel: Loading compiled-in X.509 certificates
Apr 17 23:37:31.132581 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:37:31.132589 kernel: Key type .fscrypt registered
Apr 17 23:37:31.132610 kernel: Key type fscrypt-provisioning registered
Apr 17 23:37:31.132619 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:37:31.132631 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:37:31.132639 kernel: ima: No architecture policies found
Apr 17 23:37:31.132650 kernel: clk: Disabling unused clocks
Apr 17 23:37:31.132660 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:37:31.132668 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:37:31.132681 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:37:31.132689 kernel: Run /init as init process
Apr 17 23:37:31.132704 kernel:   with arguments:
Apr 17 23:37:31.132712 kernel:     /init
Apr 17 23:37:31.132721 kernel:   with environment:
Apr 17 23:37:31.132733 kernel:     HOME=/
Apr 17 23:37:31.132745 kernel:     TERM=linux
Apr 17 23:37:31.132756 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:37:31.132771 systemd[1]: Detected virtualization microsoft.
Apr 17 23:37:31.132780 systemd[1]: Detected architecture x86-64.
Apr 17 23:37:31.132790 systemd[1]: Running in initrd.
Apr 17 23:37:31.132798 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:37:31.132807 systemd[1]: Hostname set to .
Apr 17 23:37:31.132820 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:37:31.132828 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:37:31.132841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:31.132850 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:37:31.132859 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:37:31.132873 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:37:31.132883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:37:31.132895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:37:31.132906 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:37:31.132920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:37:31.132932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:31.132943 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:37:31.132958 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:37:31.132973 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:37:31.132987 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:37:31.133003 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:37:31.133019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:37:31.133035 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:37:31.133051 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:37:31.133067 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:37:31.133083 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:37:31.133103 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:37:31.133118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:37:31.133135 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:37:31.133151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:37:31.133167 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:37:31.133183 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:37:31.133199 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:37:31.133215 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:37:31.133235 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:37:31.133250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:37:31.133293 systemd-journald[177]: Collecting audit messages is disabled.
Apr 17 23:37:31.133328 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:37:31.133348 systemd-journald[177]: Journal started
Apr 17 23:37:31.133380 systemd-journald[177]: Runtime Journal (/run/log/journal/c8bc332d0da44c7e8ac8cf5a39e1ead4) is 8.0M, max 158.7M, 150.7M free.
Apr 17 23:37:31.122120 systemd-modules-load[178]: Inserted module 'overlay'
Apr 17 23:37:31.144556 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:37:31.154929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:37:31.163361 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:37:31.173827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:31.183505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:37:31.183559 kernel: Bridge firewalling registered
Apr 17 23:37:31.183479 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 17 23:37:31.184944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:37:31.196765 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:37:31.215786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:37:31.220800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:37:31.242782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:37:31.247446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:31.260847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:37:31.265223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:37:31.275940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:37:31.287871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:37:31.297237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:37:31.307736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:37:31.312402 dracut-cmdline[207]: dracut-dracut-053
Apr 17 23:37:31.314916 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:37:31.358406 systemd-resolved[211]: Positive Trust Anchors:
Apr 17 23:37:31.358957 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:37:31.358997 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:37:31.364012 systemd-resolved[211]: Defaulting to hostname 'linux'.
Apr 17 23:37:31.365004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:37:31.389714 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:37:31.415158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:37:31.428621 kernel: SCSI subsystem initialized
Apr 17 23:37:31.438618 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:37:31.449621 kernel: iscsi: registered transport (tcp)
Apr 17 23:37:31.471586 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:37:31.471657 kernel: QLogic iSCSI HBA Driver
Apr 17 23:37:31.507580 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:37:31.519867 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:37:31.548622 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:37:31.548710 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:37:31.552091 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:37:31.594628 kernel: raid6: avx512x4 gen() 18457 MB/s
Apr 17 23:37:31.612619 kernel: raid6: avx512x2 gen() 18145 MB/s
Apr 17 23:37:31.631610 kernel: raid6: avx512x1 gen() 18246 MB/s
Apr 17 23:37:31.650617 kernel: raid6: avx2x4 gen() 18279 MB/s
Apr 17 23:37:31.668614 kernel: raid6: avx2x2 gen() 18508 MB/s
Apr 17 23:37:31.688936 kernel: raid6: avx2x1 gen() 13936 MB/s
Apr 17 23:37:31.688973 kernel: raid6: using algorithm avx2x2 gen() 18508 MB/s
Apr 17 23:37:31.710070 kernel: raid6: .... xor() 21398 MB/s, rmw enabled
Apr 17 23:37:31.710103 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:37:31.731627 kernel: xor: automatically using best checksumming function avx
Apr 17 23:37:31.880625 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:37:31.889739 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:37:31.901804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:37:31.916960 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 17 23:37:31.921618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:37:31.936784 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:37:31.950032 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Apr 17 23:37:31.978077 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:37:31.986865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:37:32.031860 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:37:32.047280 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:37:32.081586 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:37:32.089834 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:37:32.093714 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:37:32.101093 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:37:32.118857 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:37:32.140684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:37:32.149196 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:37:32.154326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:37:32.154464 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:37:32.158451 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:37:32.162009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:37:32.162319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:32.183366 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:32.202434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:32.207221 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 17 23:37:32.207248 kernel: AES CTR mode by8 optimization enabled Apr 17 23:37:32.215680 kernel: hv_vmbus: Vmbus version:5.2 Apr 17 23:37:32.231078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:37:32.232568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:32.248878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:32.259108 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 17 23:37:32.265204 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 17 23:37:32.274575 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 17 23:37:32.274640 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 17 23:37:32.278015 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 17 23:37:32.290618 kernel: hv_vmbus: registering driver hid_hyperv Apr 17 23:37:32.291741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:32.311676 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 17 23:37:32.311729 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 17 23:37:32.312110 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:37:32.323617 kernel: hv_vmbus: registering driver hv_netvsc Apr 17 23:37:32.330854 kernel: hv_vmbus: registering driver hv_storvsc Apr 17 23:37:32.330899 kernel: PTP clock support registered Apr 17 23:37:32.339558 kernel: hv_utils: Registering HyperV Utility Driver Apr 17 23:37:32.339617 kernel: hv_vmbus: registering driver hv_utils Apr 17 23:37:32.342328 kernel: hv_utils: Heartbeat IC version 3.0 Apr 17 23:37:32.344299 kernel: hv_utils: Shutdown IC version 3.2 Apr 17 23:37:32.346392 kernel: hv_utils: TimeSync IC version 4.0 Apr 17 23:37:33.154009 systemd-resolved[211]: Clock change detected. Flushing caches. Apr 17 23:37:33.160681 kernel: scsi host1: storvsc_host_t Apr 17 23:37:33.163670 kernel: scsi host0: storvsc_host_t Apr 17 23:37:33.167665 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 17 23:37:33.168808 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:37:33.177026 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 17 23:37:33.194537 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Apr 17 23:37:33.194812 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:37:33.197721 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Apr 17 23:37:33.214549 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 17 23:37:33.214839 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Apr 17 23:37:33.217103 kernel: sd 1:0:0:0: [sda] Write Protect is off Apr 17 23:37:33.222508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#276 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 17 23:37:33.222839 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 17 23:37:33.222986 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 17 23:37:33.241046 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:33.241112 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Apr 17 23:37:33.247690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#304 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 17 23:37:33.369620 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: VF slot 1 added Apr 17 23:37:33.379672 kernel: hv_vmbus: registering driver hv_pci Apr 17 23:37:33.379726 kernel: hv_pci 4b8d603c-4603-4562-aae4-e66c11de137d: PCI VMBus probing: Using version 0x10004 Apr 17 23:37:33.388889 kernel: hv_pci 4b8d603c-4603-4562-aae4-e66c11de137d: PCI host bridge to bus 4603:00 Apr 17 23:37:33.389138 kernel: pci_bus 4603:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 17 23:37:33.392732 kernel: pci_bus 4603:00: No busn resource found for root bus, will use [bus 00-ff] Apr 17 23:37:33.397749 kernel: pci 4603:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 17 23:37:33.401850 kernel: pci 4603:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 17 23:37:33.405877 kernel: pci 4603:00:02.0: enabling Extended Tags Apr 17 23:37:33.416761 kernel: pci 4603:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4603:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 17 23:37:33.422989 kernel: pci_bus 4603:00: busn_res: [bus 00-ff] end is updated to 00 Apr 17 23:37:33.423184 kernel: pci 4603:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 17 23:37:33.607532 kernel: mlx5_core 4603:00:02.0: enabling device (0000 -> 0002) Apr 17 23:37:33.611673 kernel: mlx5_core 4603:00:02.0: firmware version: 14.30.5026 Apr 17 23:37:33.732459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 17 23:37:33.741200 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (467) Apr 17 23:37:33.745668 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (449) Apr 17 23:37:33.781545 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 17 23:37:33.798854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 17 23:37:33.808140 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 17 23:37:33.812324 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 17 23:37:33.827862 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:37:33.844682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:33.854672 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:33.879897 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: VF registering: eth1 Apr 17 23:37:33.880174 kernel: mlx5_core 4603:00:02.0 eth1: joined to eth0 Apr 17 23:37:33.888627 kernel: mlx5_core 4603:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 17 23:37:33.896692 kernel: mlx5_core 4603:00:02.0 enP17923s1: renamed from eth1 Apr 17 23:37:34.868611 disk-uuid[608]: The operation has completed successfully. Apr 17 23:37:34.871806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:34.965479 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:37:34.965597 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:37:34.981882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:37:34.985736 sh[722]: Success Apr 17 23:37:35.015698 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:37:35.283134 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:37:35.295769 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:37:35.300108 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:37:35.332085 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:37:35.332143 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:37:35.336018 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:37:35.338621 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:37:35.340960 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:37:35.783901 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:37:35.790410 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:37:35.800833 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:37:35.804827 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:37:35.829679 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:37:35.829733 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:37:35.834249 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:37:35.871679 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:37:35.882639 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:37:35.888797 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:37:35.898262 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:37:35.908943 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:37:35.929030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:37:35.942813 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 17 23:37:35.970753 systemd-networkd[906]: lo: Link UP Apr 17 23:37:35.970763 systemd-networkd[906]: lo: Gained carrier Apr 17 23:37:35.973127 systemd-networkd[906]: Enumeration completed Apr 17 23:37:35.973228 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:37:35.974294 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:35.974299 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:37:35.976983 systemd[1]: Reached target network.target - Network. Apr 17 23:37:36.046705 kernel: mlx5_core 4603:00:02.0 enP17923s1: Link up Apr 17 23:37:36.091757 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: Data path switched to VF: enP17923s1 Apr 17 23:37:36.091903 systemd-networkd[906]: enP17923s1: Link UP Apr 17 23:37:36.092028 systemd-networkd[906]: eth0: Link UP Apr 17 23:37:36.094392 systemd-networkd[906]: eth0: Gained carrier Apr 17 23:37:36.094407 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:36.105875 systemd-networkd[906]: enP17923s1: Gained carrier Apr 17 23:37:36.142721 systemd-networkd[906]: eth0: DHCPv4 address 10.0.0.22/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 17 23:37:36.997648 ignition[883]: Ignition 2.19.0 Apr 17 23:37:36.997678 ignition[883]: Stage: fetch-offline Apr 17 23:37:36.999546 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:37:36.997725 ignition[883]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.007941 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 17 23:37:36.997735 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:36.997840 ignition[883]: parsed url from cmdline: "" Apr 17 23:37:36.997845 ignition[883]: no config URL provided Apr 17 23:37:36.997851 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:37:36.997863 ignition[883]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:37:36.997869 ignition[883]: failed to fetch config: resource requires networking Apr 17 23:37:36.998124 ignition[883]: Ignition finished successfully Apr 17 23:37:37.028509 ignition[914]: Ignition 2.19.0 Apr 17 23:37:37.028516 ignition[914]: Stage: fetch Apr 17 23:37:37.028781 ignition[914]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.028791 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:37.028915 ignition[914]: parsed url from cmdline: "" Apr 17 23:37:37.028919 ignition[914]: no config URL provided Apr 17 23:37:37.028926 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:37:37.028933 ignition[914]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:37:37.028959 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 17 23:37:37.137473 ignition[914]: GET result: OK Apr 17 23:37:37.137560 ignition[914]: config has been read from IMDS userdata Apr 17 23:37:37.137592 ignition[914]: parsing config with SHA512: d88bc83b8484280f4b694fa2c26d0c94824f6642a925ae2b7d591f61eb4a30ad77567f4db95b30c1a24acf8de56fadf36864098a173b02f6d587ad3b71b0051e Apr 17 23:37:37.141989 unknown[914]: fetched base config from "system" Apr 17 23:37:37.142468 ignition[914]: fetch: fetch complete Apr 17 23:37:37.142000 unknown[914]: fetched base config from "system" Apr 17 23:37:37.142475 ignition[914]: fetch: fetch passed Apr 17 23:37:37.142005 unknown[914]: fetched user config from "azure" Apr 17 23:37:37.142533 ignition[914]: Ignition finished successfully Apr 17 23:37:37.144174 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:37:37.163917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:37:37.182875 ignition[920]: Ignition 2.19.0 Apr 17 23:37:37.182888 ignition[920]: Stage: kargs Apr 17 23:37:37.183130 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.187241 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:37:37.183146 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:37.184070 ignition[920]: kargs: kargs passed Apr 17 23:37:37.184129 ignition[920]: Ignition finished successfully Apr 17 23:37:37.208890 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:37:37.227246 ignition[926]: Ignition 2.19.0 Apr 17 23:37:37.227259 ignition[926]: Stage: disks Apr 17 23:37:37.227496 ignition[926]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.230950 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:37:37.227510 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:37.234917 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:37:37.228442 ignition[926]: disks: disks passed Apr 17 23:37:37.228493 ignition[926]: Ignition finished successfully Apr 17 23:37:37.253270 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:37:37.256838 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:37:37.267140 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:37:37.267261 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:37:37.281979 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:37:37.360471 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 17 23:37:37.365739 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:37:37.377865 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:37:37.472687 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:37:37.473386 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:37:37.476443 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:37:37.516821 systemd-networkd[906]: eth0: Gained IPv6LL Apr 17 23:37:37.529842 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:37:37.547676 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Apr 17 23:37:37.555140 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:37:37.555199 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:37:37.558662 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:37:37.562842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:37:37.569799 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 17 23:37:37.576406 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:37:37.584238 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:37:37.583325 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:37:37.593308 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:37:37.593513 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 23:37:37.602900 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 17 23:37:38.376693 coreos-metadata[960]: Apr 17 23:37:38.376 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 17 23:37:38.383745 coreos-metadata[960]: Apr 17 23:37:38.383 INFO Fetch successful Apr 17 23:37:38.386762 coreos-metadata[960]: Apr 17 23:37:38.386 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 17 23:37:38.403189 coreos-metadata[960]: Apr 17 23:37:38.403 INFO Fetch successful Apr 17 23:37:38.420732 coreos-metadata[960]: Apr 17 23:37:38.420 INFO wrote hostname ci-4081.3.6-n-b8c45c9493 to /sysroot/etc/hostname Apr 17 23:37:38.427698 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 17 23:37:38.437936 initrd-setup-root[976]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:37:38.466518 initrd-setup-root[983]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:37:38.473711 initrd-setup-root[990]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:37:38.484599 initrd-setup-root[997]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:37:39.542202 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:37:39.553828 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:37:39.559825 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:37:39.570511 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 23:37:39.580044 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:37:39.601737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 23:37:39.612054 ignition[1065]: INFO : Ignition 2.19.0 Apr 17 23:37:39.612054 ignition[1065]: INFO : Stage: mount Apr 17 23:37:39.616637 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:39.616637 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:39.616637 ignition[1065]: INFO : mount: mount passed Apr 17 23:37:39.616637 ignition[1065]: INFO : Ignition finished successfully Apr 17 23:37:39.614661 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:37:39.638769 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:37:39.651297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:37:39.674679 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076) Apr 17 23:37:39.682326 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:37:39.682374 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:37:39.684767 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:37:39.697684 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:37:39.699155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:37:39.721232 ignition[1093]: INFO : Ignition 2.19.0 Apr 17 23:37:39.721232 ignition[1093]: INFO : Stage: files Apr 17 23:37:39.726378 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:39.726378 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:39.726378 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:37:39.737363 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:37:39.737363 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:37:39.846945 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:37:39.851697 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:37:39.851697 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:37:39.851697 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:37:39.851697 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:37:39.847430 unknown[1093]: wrote ssh authorized keys file for user: core Apr 17 23:37:39.933600 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 23:37:40.619670 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 17 23:37:40.984724 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 17 23:37:42.426423 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 17 23:37:42.426423 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 17 23:37:42.454238 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:37:42.459923 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:37:42.459923 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 17 23:37:42.459923 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:37:42.479492 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 17 23:37:42.479492 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:37:42.479492 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:37:42.479492 ignition[1093]: INFO : files: files passed Apr 17 23:37:42.479492 ignition[1093]: INFO : Ignition finished successfully Apr 17 23:37:42.469937 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:37:42.500924 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:37:42.512120 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:37:42.521576 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:37:42.521719 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:37:42.533370 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:37:42.541734 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:37:42.541734 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:37:42.536933 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:37:42.542150 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:37:42.564834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:37:42.588428 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:37:42.588523 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:37:42.601419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:37:42.609259 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:37:42.609417 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:37:42.619844 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:37:42.634159 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:37:42.646827 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:37:42.661335 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:37:42.661593 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 17 23:37:42.662760 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:37:42.663237 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:37:42.663398 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:37:42.664242 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:37:42.664716 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:37:42.665129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:37:42.665628 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:37:42.666105 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:37:42.666545 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:37:42.667493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:37:42.667963 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:37:42.668478 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:37:42.668915 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:37:42.669418 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:37:42.669563 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:37:42.670365 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:37:42.671229 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:37:42.671755 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:37:42.710565 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:37:42.717206 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:37:42.717380 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 17 23:37:42.792983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:37:42.796502 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:37:42.796761 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:37:42.796876 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:37:42.797177 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 17 23:37:42.797284 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 17 23:37:42.819649 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:37:42.833014 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:37:42.835800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:37:42.836093 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:37:42.841919 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:37:42.842048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:37:42.862930 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:37:42.863251 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:37:42.872379 ignition[1146]: INFO : Ignition 2.19.0 Apr 17 23:37:42.872379 ignition[1146]: INFO : Stage: umount Apr 17 23:37:42.872379 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:42.872379 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:42.886889 ignition[1146]: INFO : umount: umount passed Apr 17 23:37:42.886889 ignition[1146]: INFO : Ignition finished successfully Apr 17 23:37:42.877552 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:37:42.877885 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 17 23:37:42.884057 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:37:42.884111 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:37:42.889717 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:37:42.889775 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:37:42.894905 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 17 23:37:42.894958 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:37:42.900744 systemd[1]: Stopped target network.target - Network. Apr 17 23:37:42.905561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:37:42.905620 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:37:42.911630 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:37:42.914125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:37:42.919499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:37:42.925550 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:37:42.928445 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:37:42.931270 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:37:42.931322 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:37:42.936747 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:37:42.936795 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:37:42.942376 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:37:42.942429 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:37:42.945280 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:37:42.945327 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 17 23:37:42.952451 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:37:42.957992 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:37:42.979443 systemd-networkd[906]: eth0: DHCPv6 lease lost Apr 17 23:37:42.982881 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:37:42.982986 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:37:42.999018 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:37:42.999195 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:37:43.018592 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:37:43.018695 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:37:43.056762 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:37:43.059792 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:37:43.059875 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:37:43.066160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:37:43.069308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:37:43.075946 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:37:43.076004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:37:43.084970 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:37:43.087868 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:37:43.105995 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:37:43.125268 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:37:43.128755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 17 23:37:43.136787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:37:43.136852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:37:43.145210 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:37:43.145266 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:37:43.154048 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:37:43.154120 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:37:43.159411 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:37:43.159452 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:37:43.162627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:37:43.162687 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:37:43.185343 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: Data path switched from VF: enP17923s1 Apr 17 23:37:43.186900 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:37:43.190125 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:37:43.190204 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:37:43.193879 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 17 23:37:43.193922 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:37:43.201027 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:37:43.201082 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:37:43.204535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:37:43.204585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:37:43.205029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:37:43.205141 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:37:43.214555 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:37:43.214649 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:37:43.560807 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:37:43.611599 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:37:43.611739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:37:43.617856 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:37:43.623161 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:37:43.623246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:37:43.637903 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:37:43.647196 systemd[1]: Switching root. 
Apr 17 23:37:43.720910 systemd-journald[177]: Journal stopped Apr 17 23:37:31.120559 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026 Apr 17 23:37:31.120591 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:37:31.120619 kernel: BIOS-provided physical RAM map: Apr 17 23:37:31.120629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 17 23:37:31.120639 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Apr 17 23:37:31.120650 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable Apr 17 23:37:31.120663 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Apr 17 23:37:31.120673 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Apr 17 23:37:31.120686 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Apr 17 23:37:31.120697 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Apr 17 23:37:31.120708 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Apr 17 23:37:31.120719 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Apr 17 23:37:31.120730 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Apr 17 23:37:31.120741 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Apr 17 23:37:31.120759 kernel: printk: bootconsole [earlyser0] enabled Apr 17 23:37:31.120770 kernel: NX (Execute Disable) protection: active
Apr 17 23:37:31.120783 kernel: APIC: Static calls initialized Apr 17 23:37:31.120794 kernel: efi: EFI v2.7 by Microsoft Apr 17 23:37:31.120805 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f420418 Apr 17 23:37:31.120817 kernel: SMBIOS 3.1.0 present. Apr 17 23:37:31.120829 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026 Apr 17 23:37:31.120841 kernel: Hypervisor detected: Microsoft Hyper-V Apr 17 23:37:31.120853 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Apr 17 23:37:31.120864 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0 Apr 17 23:37:31.120875 kernel: Hyper-V: Nested features: 0x1e0101 Apr 17 23:37:31.120893 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Apr 17 23:37:31.120903 kernel: Hyper-V: Using hypercall for remote TLB flush Apr 17 23:37:31.120916 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 17 23:37:31.120928 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 17 23:37:31.120940 kernel: tsc: Marking TSC unstable due to running on Hyper-V Apr 17 23:37:31.120953 kernel: tsc: Detected 2593.906 MHz processor Apr 17 23:37:31.120966 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 17 23:37:31.120977 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 17 23:37:31.120990 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Apr 17 23:37:31.121007 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 17 23:37:31.121020 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 17 23:37:31.121033 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Apr 17 23:37:31.121044 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Apr 17 23:37:31.121054 kernel: Using GB pages for direct mapping
Apr 17 23:37:31.121066 kernel: Secure boot disabled Apr 17 23:37:31.121085 kernel: ACPI: Early table checksum verification disabled Apr 17 23:37:31.121100 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 17 23:37:31.121115 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121129 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121144 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 17 23:37:31.121157 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 17 23:37:31.121169 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121181 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121196 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121208 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121221 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121235 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:37:31.121249 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 17 23:37:31.121262 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Apr 17 23:37:31.121275 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 17 23:37:31.121288 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 17 23:37:31.121302 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 17 23:37:31.121319 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 17 23:37:31.121332 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 17 23:37:31.121346 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Apr 17 23:37:31.121360 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 17 23:37:31.121373 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 17 23:37:31.121387 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 17 23:37:31.121400 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 17 23:37:31.121414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 17 23:37:31.121427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 17 23:37:31.121443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 17 23:37:31.121457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 17 23:37:31.121471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 17 23:37:31.121485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 17 23:37:31.121498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 17 23:37:31.121512 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 17 23:37:31.121525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 17 23:37:31.121539 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 17 23:37:31.121555 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 17 23:37:31.121569 kernel: Zone ranges: Apr 17 23:37:31.121583 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 23:37:31.121596 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 17 23:37:31.121746 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 17 23:37:31.121760 kernel: Movable zone start for each node Apr 17 23:37:31.121773 kernel: Early memory node ranges Apr 17 23:37:31.121786 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:37:31.121798 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Apr 17 23:37:31.121829 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Apr 17 23:37:31.121855 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 17 23:37:31.121876 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 17 23:37:31.121886 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 17 23:37:31.121899 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 23:37:31.121912 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 17 23:37:31.121925 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 17 23:37:31.121938 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Apr 17 23:37:31.121950 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 17 23:37:31.121964 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 17 23:37:31.121977 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 17 23:37:31.121991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 23:37:31.122005 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 23:37:31.122018 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 17 23:37:31.122029 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 17 23:37:31.122041 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 17 23:37:31.122054 kernel: Booting paravirtualized kernel on Hyper-V Apr 17 23:37:31.122066 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 23:37:31.122089 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 23:37:31.122105 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 17 23:37:31.122116 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 17 23:37:31.122127 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:37:31.122139 kernel: Hyper-V: PV spinlocks enabled Apr 17 23:37:31.122152 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 23:37:31.122166 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:37:31.122178 kernel: random: crng init done Apr 17 23:37:31.122194 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 17 23:37:31.122206 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 23:37:31.122219 kernel: Fallback order for Node 0: 0 Apr 17 23:37:31.122232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Apr 17 23:37:31.122244 kernel: Policy zone: Normal Apr 17 23:37:31.122257 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 23:37:31.122270 kernel: software IO TLB: area num 2. Apr 17 23:37:31.122285 kernel: Memory: 8066036K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316932K reserved, 0K cma-reserved) Apr 17 23:37:31.122299 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 23:37:31.122327 kernel: ftrace: allocating 37996 entries in 149 pages Apr 17 23:37:31.122342 kernel: ftrace: allocated 149 pages with 4 groups Apr 17 23:37:31.122356 kernel: Dynamic Preempt: voluntary Apr 17 23:37:31.122374 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 23:37:31.122394 kernel: rcu: RCU event tracing is enabled. Apr 17 23:37:31.122409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:37:31.122424 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 23:37:31.122439 kernel: Rude variant of Tasks RCU enabled. Apr 17 23:37:31.122455 kernel: Tracing variant of Tasks RCU enabled. Apr 17 23:37:31.122472 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 23:37:31.122488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 23:37:31.122503 kernel: Using NULL legacy PIC Apr 17 23:37:31.122518 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 17 23:37:31.122533 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 17 23:37:31.122548 kernel: Console: colour dummy device 80x25 Apr 17 23:37:31.122563 kernel: printk: console [tty1] enabled Apr 17 23:37:31.122578 kernel: printk: console [ttyS0] enabled Apr 17 23:37:31.122596 kernel: printk: bootconsole [earlyser0] disabled Apr 17 23:37:31.122621 kernel: ACPI: Core revision 20230628 Apr 17 23:37:31.122633 kernel: Failed to register legacy timer interrupt Apr 17 23:37:31.124641 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 23:37:31.124660 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 17 23:37:31.124676 kernel: Hyper-V: Using IPI hypercalls Apr 17 23:37:31.124690 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 17 23:37:31.124704 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 17 23:37:31.124719 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 17 23:37:31.124738 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 17 23:37:31.124753 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 17 23:37:31.124767 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 17 23:37:31.124781 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 17 23:37:31.124795 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 17 23:37:31.124809 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 17 23:37:31.124823 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 23:37:31.124837 kernel: Spectre V2 : Mitigation: Retpolines Apr 17 23:37:31.124858 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 23:37:31.124872 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 17 23:37:31.124890 kernel: RETBleed: Vulnerable Apr 17 23:37:31.124905 kernel: Speculative Store Bypass: Vulnerable Apr 17 23:37:31.124923 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:37:31.124936 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:37:31.124951 kernel: active return thunk: its_return_thunk Apr 17 23:37:31.124965 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 17 23:37:31.124980 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 23:37:31.124995 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 23:37:31.125010 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 23:37:31.125025 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 17 23:37:31.125044 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 17 23:37:31.125059 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 17 23:37:31.125074 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 23:37:31.125088 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 17 23:37:31.125103 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 17 23:37:31.125118 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 17 23:37:31.125133 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:37:31.125148 kernel: Freeing SMP alternatives memory: 32K Apr 17 23:37:31.125163 kernel: pid_max: default: 32768 minimum: 301 Apr 17 23:37:31.125178 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 17 23:37:31.125192 kernel: landlock: Up and running. Apr 17 23:37:31.125207 kernel: SELinux: Initializing. Apr 17 23:37:31.125225 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:37:31.125240 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:37:31.125255 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 17 23:37:31.125270 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:37:31.125286 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:37:31.125301 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:37:31.125316 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 17 23:37:31.125331 kernel: signal: max sigframe size: 3632 Apr 17 23:37:31.125346 kernel: rcu: Hierarchical SRCU implementation. Apr 17 23:37:31.125365 kernel: rcu: Max phase no-delay instances is 400. Apr 17 23:37:31.125380 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 17 23:37:31.125395 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:37:31.125410 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:37:31.125425 kernel: .... node #0, CPUs: #1 Apr 17 23:37:31.125440 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 17 23:37:31.125457 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 17 23:37:31.125472 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 23:37:31.125486 kernel: smpboot: Max logical packages: 1 Apr 17 23:37:31.125504 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 17 23:37:31.125519 kernel: devtmpfs: initialized Apr 17 23:37:31.125534 kernel: x86/mm: Memory block size: 128MB Apr 17 23:37:31.125549 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 17 23:37:31.125565 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:37:31.125580 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 23:37:31.125595 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:37:31.125620 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:37:31.125633 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:37:31.125648 kernel: audit: type=2000 audit(1776469050.029:1): state=initialized audit_enabled=0 res=1 Apr 17 23:37:31.125660 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 23:37:31.125672 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:37:31.125684 kernel: cpuidle: using governor menu Apr 17 23:37:31.125697 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 23:37:31.125706 kernel: dca service started, version 1.12.1 Apr 17 23:37:31.125714 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Apr 17 23:37:31.125722 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Apr 17 23:37:31.125730 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 17 23:37:31.125741 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 17 23:37:31.125749 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 17 23:37:31.125757 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 17 23:37:31.125765 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 17 23:37:31.125773 kernel: ACPI: Added _OSI(Module Device) Apr 17 23:37:31.125781 kernel: ACPI: Added _OSI(Processor Device) Apr 17 23:37:31.125790 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 17 23:37:31.125798 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 17 23:37:31.125808 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 17 23:37:31.129503 kernel: ACPI: Interpreter enabled Apr 17 23:37:31.129521 kernel: ACPI: PM: (supports S0 S5) Apr 17 23:37:31.129535 kernel: ACPI: Using IOAPIC for interrupt routing Apr 17 23:37:31.129548 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 17 23:37:31.129561 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 17 23:37:31.129576 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Apr 17 23:37:31.129589 kernel: iommu: Default domain type: Translated Apr 17 23:37:31.131627 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 17 23:37:31.131641 kernel: efivars: Registered efivars operations Apr 17 23:37:31.131658 kernel: PCI: Using ACPI for IRQ routing Apr 17 23:37:31.131666 kernel: PCI: System does not support PCI Apr 17 23:37:31.131677 kernel: vgaarb: loaded Apr 17 23:37:31.131687 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Apr 17 23:37:31.131695 kernel: VFS: Disk quotas dquot_6.6.0 Apr 17 23:37:31.131703 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 17 23:37:31.131716 kernel: pnp: PnP ACPI init Apr 17 23:37:31.131724 kernel: pnp: PnP ACPI: found 3 devices Apr 17 23:37:31.131732 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:37:31.131747 kernel: NET: Registered PF_INET protocol family Apr 17 23:37:31.131756 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 17 23:37:31.131766 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 17 23:37:31.131777 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 17 23:37:31.131785 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 17 23:37:31.131797 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 17 23:37:31.131807 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 17 23:37:31.131819 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 17 23:37:31.131828 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 17 23:37:31.131839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:37:31.131851 kernel: NET: Registered PF_XDP protocol family Apr 17 23:37:31.131859 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:37:31.131868 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 17 23:37:31.131880 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Apr 17 23:37:31.131889 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 17 23:37:31.131900 kernel: Initialise system trusted keyrings Apr 17 23:37:31.131910 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 17 23:37:31.131920 kernel: Key type asymmetric registered Apr 17 23:37:31.131932 kernel: Asymmetric key parser 'x509' registered Apr 17 23:37:31.131941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 17 23:37:31.131949 kernel: io scheduler mq-deadline registered Apr 17 23:37:31.131961 kernel: io scheduler kyber registered
Apr 17 23:37:31.131969 kernel: io scheduler bfq registered Apr 17 23:37:31.131977 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 17 23:37:31.131985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:37:31.131993 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 17 23:37:31.132002 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 17 23:37:31.132016 kernel: i8042: PNP: No PS/2 controller found. Apr 17 23:37:31.132171 kernel: rtc_cmos 00:02: registered as rtc0 Apr 17 23:37:31.132275 kernel: rtc_cmos 00:02: setting system clock to 2026-04-17T23:37:30 UTC (1776469050) Apr 17 23:37:31.132368 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Apr 17 23:37:31.132384 kernel: intel_pstate: CPU model not supported Apr 17 23:37:31.132392 kernel: efifb: probing for efifb Apr 17 23:37:31.132405 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 17 23:37:31.132417 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 17 23:37:31.132429 kernel: efifb: scrolling: redraw Apr 17 23:37:31.132437 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 17 23:37:31.132448 kernel: Console: switching to colour frame buffer device 128x48 Apr 17 23:37:31.132458 kernel: fb0: EFI VGA frame buffer device Apr 17 23:37:31.132467 kernel: pstore: Using crash dump compression: deflate Apr 17 23:37:31.132480 kernel: pstore: Registered efi_pstore as persistent store backend Apr 17 23:37:31.132488 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:37:31.132499 kernel: Segment Routing with IPv6 Apr 17 23:37:31.132510 kernel: In-situ OAM (IOAM) with IPv6 Apr 17 23:37:31.132519 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:37:31.132531 kernel: Key type dns_resolver registered Apr 17 23:37:31.132539 kernel: IPI shorthand broadcast: enabled Apr 17 23:37:31.132552 kernel: sched_clock: Marking stable (907002900, 49801000)->(1197654400, -240850500)
Apr 17 23:37:31.132560 kernel: registered taskstats version 1 Apr 17 23:37:31.132569 kernel: Loading compiled-in X.509 certificates Apr 17 23:37:31.132581 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f' Apr 17 23:37:31.132589 kernel: Key type .fscrypt registered Apr 17 23:37:31.132610 kernel: Key type fscrypt-provisioning registered Apr 17 23:37:31.132619 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 23:37:31.132631 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:37:31.132639 kernel: ima: No architecture policies found Apr 17 23:37:31.132650 kernel: clk: Disabling unused clocks Apr 17 23:37:31.132660 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 17 23:37:31.132668 kernel: Write protecting the kernel read-only data: 36864k Apr 17 23:37:31.132681 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 17 23:37:31.132689 kernel: Run /init as init process Apr 17 23:37:31.132704 kernel: with arguments: Apr 17 23:37:31.132712 kernel: /init Apr 17 23:37:31.132721 kernel: with environment: Apr 17 23:37:31.132733 kernel: HOME=/ Apr 17 23:37:31.132745 kernel: TERM=linux Apr 17 23:37:31.132756 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:37:31.132771 systemd[1]: Detected virtualization microsoft. Apr 17 23:37:31.132780 systemd[1]: Detected architecture x86-64. Apr 17 23:37:31.132790 systemd[1]: Running in initrd. Apr 17 23:37:31.132798 systemd[1]: No hostname configured, using default hostname. Apr 17 23:37:31.132807 systemd[1]: Hostname set to . Apr 17 23:37:31.132820 systemd[1]: Initializing machine ID from random generator. 
Apr 17 23:37:31.132828 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:37:31.132841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:37:31.132850 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:37:31.132859 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:37:31.132873 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:37:31.132883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:37:31.132895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:37:31.132906 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:37:31.132920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:37:31.132932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:37:31.132943 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:37:31.132958 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:37:31.132973 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:37:31.132987 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:37:31.133003 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:37:31.133019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:37:31.133035 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:37:31.133051 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 17 23:37:31.133067 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:37:31.133083 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:37:31.133103 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:37:31.133118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:37:31.133135 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:37:31.133151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:37:31.133167 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:37:31.133183 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:37:31.133199 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:37:31.133215 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:37:31.133235 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:37:31.133250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:31.133293 systemd-journald[177]: Collecting audit messages is disabled. Apr 17 23:37:31.133328 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:37:31.133348 systemd-journald[177]: Journal started Apr 17 23:37:31.133380 systemd-journald[177]: Runtime Journal (/run/log/journal/c8bc332d0da44c7e8ac8cf5a39e1ead4) is 8.0M, max 158.7M, 150.7M free. Apr 17 23:37:31.122120 systemd-modules-load[178]: Inserted module 'overlay' Apr 17 23:37:31.144556 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:37:31.154929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:37:31.163361 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:37:31.173827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 23:37:31.183505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:37:31.183559 kernel: Bridge firewalling registered Apr 17 23:37:31.183479 systemd-modules-load[178]: Inserted module 'br_netfilter' Apr 17 23:37:31.184944 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:37:31.196765 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:37:31.215786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:37:31.220800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:37:31.242782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:37:31.247446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:37:31.260847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:37:31.265223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:37:31.275940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:37:31.287871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:37:31.297237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:37:31.307736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 17 23:37:31.312402 dracut-cmdline[207]: dracut-dracut-053 Apr 17 23:37:31.314916 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:37:31.358406 systemd-resolved[211]: Positive Trust Anchors: Apr 17 23:37:31.358957 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:37:31.358997 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:37:31.364012 systemd-resolved[211]: Defaulting to hostname 'linux'. Apr 17 23:37:31.365004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:37:31.389714 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:37:31.415158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:37:31.428621 kernel: SCSI subsystem initialized Apr 17 23:37:31.438618 kernel: Loading iSCSI transport class v2.0-870. 
Apr 17 23:37:31.449621 kernel: iscsi: registered transport (tcp) Apr 17 23:37:31.471586 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:37:31.471657 kernel: QLogic iSCSI HBA Driver Apr 17 23:37:31.507580 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:37:31.519867 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:37:31.548622 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:37:31.548710 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:37:31.552091 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:37:31.594628 kernel: raid6: avx512x4 gen() 18457 MB/s Apr 17 23:37:31.612619 kernel: raid6: avx512x2 gen() 18145 MB/s Apr 17 23:37:31.631610 kernel: raid6: avx512x1 gen() 18246 MB/s Apr 17 23:37:31.650617 kernel: raid6: avx2x4 gen() 18279 MB/s Apr 17 23:37:31.668614 kernel: raid6: avx2x2 gen() 18508 MB/s Apr 17 23:37:31.688936 kernel: raid6: avx2x1 gen() 13936 MB/s Apr 17 23:37:31.688973 kernel: raid6: using algorithm avx2x2 gen() 18508 MB/s Apr 17 23:37:31.710070 kernel: raid6: .... xor() 21398 MB/s, rmw enabled Apr 17 23:37:31.710103 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:37:31.731627 kernel: xor: automatically using best checksumming function avx Apr 17 23:37:31.880625 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:37:31.889739 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:37:31.901804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:37:31.916960 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 17 23:37:31.921618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:37:31.936784 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 17 23:37:31.950032 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Apr 17 23:37:31.978077 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:37:31.986865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:37:32.031860 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:37:32.047280 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:37:32.081586 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:37:32.089834 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:37:32.093714 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:37:32.101093 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:37:32.118857 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:37:32.140684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:37:32.149196 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:37:32.154326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:37:32.154464 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:37:32.158451 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:37:32.162009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:37:32.162319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:32.183366 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:32.202434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:32.207221 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 17 23:37:32.207248 kernel: AES CTR mode by8 optimization enabled Apr 17 23:37:32.215680 kernel: hv_vmbus: Vmbus version:5.2 Apr 17 23:37:32.231078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:37:32.232568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:32.248878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:32.259108 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 17 23:37:32.265204 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 17 23:37:32.274575 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 17 23:37:32.274640 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 17 23:37:32.278015 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 17 23:37:32.290618 kernel: hv_vmbus: registering driver hid_hyperv Apr 17 23:37:32.291741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:32.311676 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 17 23:37:32.311729 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 17 23:37:32.312110 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:37:32.323617 kernel: hv_vmbus: registering driver hv_netvsc Apr 17 23:37:32.330854 kernel: hv_vmbus: registering driver hv_storvsc Apr 17 23:37:32.330899 kernel: PTP clock support registered Apr 17 23:37:32.339558 kernel: hv_utils: Registering HyperV Utility Driver Apr 17 23:37:32.339617 kernel: hv_vmbus: registering driver hv_utils Apr 17 23:37:32.342328 kernel: hv_utils: Heartbeat IC version 3.0 Apr 17 23:37:32.344299 kernel: hv_utils: Shutdown IC version 3.2 Apr 17 23:37:32.346392 kernel: hv_utils: TimeSync IC version 4.0 Apr 17 23:37:33.154009 systemd-resolved[211]: Clock change detected. Flushing caches. Apr 17 23:37:33.160681 kernel: scsi host1: storvsc_host_t Apr 17 23:37:33.163670 kernel: scsi host0: storvsc_host_t Apr 17 23:37:33.167665 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 17 23:37:33.168808 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:37:33.177026 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 17 23:37:33.194537 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Apr 17 23:37:33.194812 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:37:33.197721 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Apr 17 23:37:33.214549 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 17 23:37:33.214839 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Apr 17 23:37:33.217103 kernel: sd 1:0:0:0: [sda] Write Protect is off Apr 17 23:37:33.222508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#276 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 17 23:37:33.222839 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 17 23:37:33.222986 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 17 23:37:33.241046 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:33.241112 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Apr 17 23:37:33.247690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#304 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 23:37:33.369620 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: VF slot 1 added Apr 17 23:37:33.379672 kernel: hv_vmbus: registering driver hv_pci Apr 17 23:37:33.379726 kernel: hv_pci 4b8d603c-4603-4562-aae4-e66c11de137d: PCI VMBus probing: Using version 0x10004 Apr 17 23:37:33.388889 kernel: hv_pci 4b8d603c-4603-4562-aae4-e66c11de137d: PCI host bridge to bus 4603:00 Apr 17 23:37:33.389138 kernel: pci_bus 4603:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 17 23:37:33.392732 kernel: pci_bus 4603:00: No busn resource found for root bus, will use [bus 00-ff] Apr 17 23:37:33.397749 kernel: pci 4603:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 17 23:37:33.401850 kernel: pci 4603:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 17 23:37:33.405877 kernel: pci 4603:00:02.0: enabling Extended Tags Apr 17 23:37:33.416761 kernel: pci 4603:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4603:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 17 23:37:33.422989 kernel: pci_bus 4603:00: busn_res: [bus 00-ff] end is updated to 00 Apr 17 23:37:33.423184 kernel: pci 4603:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 17 23:37:33.607532 kernel: mlx5_core 4603:00:02.0: enabling device (0000 -> 0002) Apr 17 23:37:33.611673 kernel: mlx5_core 4603:00:02.0: firmware version: 14.30.5026 Apr 17 23:37:33.732459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 17 23:37:33.741200 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (467) Apr 17 23:37:33.745668 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (449) Apr 17 23:37:33.781545 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 17 23:37:33.798854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 17 23:37:33.808140 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 17 23:37:33.812324 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 17 23:37:33.827862 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:37:33.844682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:33.854672 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:33.879897 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: VF registering: eth1 Apr 17 23:37:33.880174 kernel: mlx5_core 4603:00:02.0 eth1: joined to eth0 Apr 17 23:37:33.888627 kernel: mlx5_core 4603:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 17 23:37:33.896692 kernel: mlx5_core 4603:00:02.0 enP17923s1: renamed from eth1 Apr 17 23:37:34.868611 disk-uuid[608]: The operation has completed successfully. Apr 17 23:37:34.871806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:37:34.965479 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:37:34.965597 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:37:34.981882 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:37:34.985736 sh[722]: Success Apr 17 23:37:35.015698 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:37:35.283134 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:37:35.295769 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:37:35.300108 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 23:37:35.332085 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:37:35.332143 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:37:35.336018 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:37:35.338621 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:37:35.340960 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:37:35.783901 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:37:35.790410 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:37:35.800833 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:37:35.804827 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:37:35.829679 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:37:35.829733 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:37:35.834249 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:37:35.871679 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:37:35.882639 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:37:35.888797 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:37:35.898262 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:37:35.908943 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:37:35.929030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:37:35.942813 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 17 23:37:35.970753 systemd-networkd[906]: lo: Link UP Apr 17 23:37:35.970763 systemd-networkd[906]: lo: Gained carrier Apr 17 23:37:35.973127 systemd-networkd[906]: Enumeration completed Apr 17 23:37:35.973228 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:37:35.974294 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:35.974299 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:37:35.976983 systemd[1]: Reached target network.target - Network. Apr 17 23:37:36.046705 kernel: mlx5_core 4603:00:02.0 enP17923s1: Link up Apr 17 23:37:36.091757 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: Data path switched to VF: enP17923s1 Apr 17 23:37:36.091903 systemd-networkd[906]: enP17923s1: Link UP Apr 17 23:37:36.092028 systemd-networkd[906]: eth0: Link UP Apr 17 23:37:36.094392 systemd-networkd[906]: eth0: Gained carrier Apr 17 23:37:36.094407 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:36.105875 systemd-networkd[906]: enP17923s1: Gained carrier Apr 17 23:37:36.142721 systemd-networkd[906]: eth0: DHCPv4 address 10.0.0.22/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 17 23:37:36.997648 ignition[883]: Ignition 2.19.0 Apr 17 23:37:36.997678 ignition[883]: Stage: fetch-offline Apr 17 23:37:36.999546 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:37:36.997725 ignition[883]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.007941 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 17 23:37:36.997735 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:36.997840 ignition[883]: parsed url from cmdline: "" Apr 17 23:37:36.997845 ignition[883]: no config URL provided Apr 17 23:37:36.997851 ignition[883]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:37:36.997863 ignition[883]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:37:36.997869 ignition[883]: failed to fetch config: resource requires networking Apr 17 23:37:36.998124 ignition[883]: Ignition finished successfully Apr 17 23:37:37.028509 ignition[914]: Ignition 2.19.0 Apr 17 23:37:37.028516 ignition[914]: Stage: fetch Apr 17 23:37:37.028781 ignition[914]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.028791 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:37.028915 ignition[914]: parsed url from cmdline: "" Apr 17 23:37:37.028919 ignition[914]: no config URL provided Apr 17 23:37:37.028926 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:37:37.028933 ignition[914]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:37:37.028959 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 17 23:37:37.137473 ignition[914]: GET result: OK Apr 17 23:37:37.137560 ignition[914]: config has been read from IMDS userdata Apr 17 23:37:37.137592 ignition[914]: parsing config with SHA512: d88bc83b8484280f4b694fa2c26d0c94824f6642a925ae2b7d591f61eb4a30ad77567f4db95b30c1a24acf8de56fadf36864098a173b02f6d587ad3b71b0051e Apr 17 23:37:37.141989 unknown[914]: fetched base config from "system" Apr 17 23:37:37.142468 ignition[914]: fetch: fetch complete Apr 17 23:37:37.142000 unknown[914]: fetched base config from "system" Apr 17 23:37:37.142475 ignition[914]: fetch: fetch passed Apr 17 23:37:37.142005 unknown[914]: fetched user config from "azure" Apr 17 23:37:37.142533 ignition[914]: Ignition finished successfully
Apr 17 23:37:37.144174 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:37:37.163917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:37:37.182875 ignition[920]: Ignition 2.19.0 Apr 17 23:37:37.182888 ignition[920]: Stage: kargs Apr 17 23:37:37.183130 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.187241 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:37:37.183146 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:37.184070 ignition[920]: kargs: kargs passed Apr 17 23:37:37.184129 ignition[920]: Ignition finished successfully Apr 17 23:37:37.208890 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:37:37.227246 ignition[926]: Ignition 2.19.0 Apr 17 23:37:37.227259 ignition[926]: Stage: disks Apr 17 23:37:37.227496 ignition[926]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:37:37.230950 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:37:37.227510 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:37:37.234917 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:37:37.228442 ignition[926]: disks: disks passed Apr 17 23:37:37.228493 ignition[926]: Ignition finished successfully Apr 17 23:37:37.253270 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 23:37:37.256838 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:37:37.267140 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:37:37.267261 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:37:37.281979 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:37:37.360471 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 17 23:37:37.365739 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:37:37.377865 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:37:37.472687 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:37:37.473386 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:37:37.476443 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:37:37.516821 systemd-networkd[906]: eth0: Gained IPv6LL
Apr 17 23:37:37.529842 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:37:37.547676 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Apr 17 23:37:37.555140 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:37.555199 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:37.558662 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:37.562842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:37:37.569799 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 17 23:37:37.576406 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:37:37.584238 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:37.583325 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:37:37.593308 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:37:37.593513 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:37:37.602900 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:37:38.376693 coreos-metadata[960]: Apr 17 23:37:38.376 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 17 23:37:38.383745 coreos-metadata[960]: Apr 17 23:37:38.383 INFO Fetch successful
Apr 17 23:37:38.386762 coreos-metadata[960]: Apr 17 23:37:38.386 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 17 23:37:38.403189 coreos-metadata[960]: Apr 17 23:37:38.403 INFO Fetch successful
Apr 17 23:37:38.420732 coreos-metadata[960]: Apr 17 23:37:38.420 INFO wrote hostname ci-4081.3.6-n-b8c45c9493 to /sysroot/etc/hostname
Apr 17 23:37:38.427698 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:37:38.437936 initrd-setup-root[976]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:37:38.466518 initrd-setup-root[983]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:37:38.473711 initrd-setup-root[990]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:37:38.484599 initrd-setup-root[997]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:37:39.542202 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:37:39.553828 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:37:39.559825 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:37:39.570511 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:37:39.580044 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:39.601737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:37:39.612054 ignition[1065]: INFO : Ignition 2.19.0
Apr 17 23:37:39.612054 ignition[1065]: INFO : Stage: mount
Apr 17 23:37:39.616637 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:39.616637 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:37:39.616637 ignition[1065]: INFO : mount: mount passed
Apr 17 23:37:39.616637 ignition[1065]: INFO : Ignition finished successfully
Apr 17 23:37:39.614661 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:37:39.638769 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:37:39.651297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:37:39.674679 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076)
Apr 17 23:37:39.682326 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:37:39.682374 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:37:39.684767 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:37:39.697684 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:37:39.699155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:37:39.721232 ignition[1093]: INFO : Ignition 2.19.0
Apr 17 23:37:39.721232 ignition[1093]: INFO : Stage: files
Apr 17 23:37:39.726378 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:39.726378 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:37:39.726378 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:37:39.737363 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:37:39.737363 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:37:39.846945 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:37:39.851697 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:37:39.851697 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:37:39.851697 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:37:39.851697 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:37:39.847430 unknown[1093]: wrote ssh authorized keys file for user: core
Apr 17 23:37:39.933600 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:37:40.619670 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:37:40.625764 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 23:37:40.984724 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:37:42.426423 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:37:42.426423 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:37:42.454238 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:37:42.459923 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:37:42.459923 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:37:42.459923 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:37:42.479492 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:37:42.479492 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:37:42.479492 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:37:42.479492 ignition[1093]: INFO : files: files passed
Apr 17 23:37:42.479492 ignition[1093]: INFO : Ignition finished successfully
Apr 17 23:37:42.469937 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:37:42.500924 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:37:42.512120 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:37:42.521576 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:37:42.521719 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:37:42.533370 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:42.541734 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:42.541734 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:37:42.536933 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:37:42.542150 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:37:42.564834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:37:42.588428 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:37:42.588523 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:37:42.601419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:37:42.609259 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:37:42.609417 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:37:42.619844 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:37:42.634159 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:37:42.646827 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:37:42.661335 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:37:42.661593 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:37:42.662760 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:37:42.663237 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:37:42.663398 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:37:42.664242 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:37:42.664716 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:37:42.665129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:37:42.665628 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:37:42.666105 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:37:42.666545 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:37:42.667493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:37:42.667963 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:37:42.668478 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:37:42.668915 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:37:42.669418 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:37:42.669563 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:37:42.670365 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:37:42.671229 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:42.671755 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:37:42.710565 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:42.717206 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:37:42.717380 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:37:42.792983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:37:42.796502 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:37:42.796761 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:37:42.796876 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:37:42.797177 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 17 23:37:42.797284 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:37:42.819649 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:37:42.833014 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:37:42.835800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:37:42.836093 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:37:42.841919 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:37:42.842048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:37:42.862930 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:37:42.863251 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:37:42.872379 ignition[1146]: INFO : Ignition 2.19.0
Apr 17 23:37:42.872379 ignition[1146]: INFO : Stage: umount
Apr 17 23:37:42.872379 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:37:42.872379 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:37:42.886889 ignition[1146]: INFO : umount: umount passed
Apr 17 23:37:42.886889 ignition[1146]: INFO : Ignition finished successfully
Apr 17 23:37:42.877552 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:37:42.877885 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:37:42.884057 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:37:42.884111 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:37:42.889717 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:37:42.889775 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:37:42.894905 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:37:42.894958 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:37:42.900744 systemd[1]: Stopped target network.target - Network.
Apr 17 23:37:42.905561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:37:42.905620 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:37:42.911630 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:37:42.914125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:37:42.919499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:37:42.925550 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:37:42.928445 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:37:42.931270 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:37:42.931322 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:37:42.936747 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:37:42.936795 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:37:42.942376 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:37:42.942429 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:37:42.945280 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:37:42.945327 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:37:42.952451 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:37:42.957992 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:37:42.979443 systemd-networkd[906]: eth0: DHCPv6 lease lost
Apr 17 23:37:42.982881 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:37:42.982986 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:37:42.999018 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:37:42.999195 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:37:43.018592 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:37:43.018695 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:37:43.056762 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:37:43.059792 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:37:43.059875 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:37:43.066160 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:37:43.069308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:37:43.075946 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:37:43.076004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:37:43.084970 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:37:43.087868 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:37:43.105995 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:37:43.125268 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:37:43.128755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:37:43.136787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:37:43.136852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:37:43.145210 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:37:43.145266 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:37:43.154048 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:37:43.154120 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:37:43.159411 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:37:43.159452 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:37:43.162627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:37:43.162687 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:37:43.185343 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: Data path switched from VF: enP17923s1
Apr 17 23:37:43.186900 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:37:43.190125 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:37:43.190204 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:37:43.193879 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:37:43.193922 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:37:43.201027 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:37:43.201082 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:37:43.204535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:37:43.204585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:37:43.205029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:37:43.205141 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:37:43.214555 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:37:43.214649 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:37:43.560807 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:37:43.611599 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:37:43.611739 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:37:43.617856 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:37:43.623161 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:37:43.623246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:37:43.637903 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:37:43.647196 systemd[1]: Switching root.
Apr 17 23:37:43.720910 systemd-journald[177]: Journal stopped
Apr 17 23:37:48.292344 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:37:48.292371 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:37:48.292388 kernel: SELinux: policy capability open_perms=1
Apr 17 23:37:48.292401 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:37:48.292409 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:37:48.292417 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:37:48.292431 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:37:48.292440 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:37:48.292455 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:37:48.292464 kernel: audit: type=1403 audit(1776469064.890:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:37:48.292473 systemd[1]: Successfully loaded SELinux policy in 144.107ms.
Apr 17 23:37:48.292487 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.804ms.
Apr 17 23:37:48.292497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:37:48.292511 systemd[1]: Detected virtualization microsoft.
Apr 17 23:37:48.292524 systemd[1]: Detected architecture x86-64.
Apr 17 23:37:48.292538 systemd[1]: Detected first boot.
Apr 17 23:37:48.292548 systemd[1]: Hostname set to .
Apr 17 23:37:48.292561 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:37:48.292571 zram_generator::config[1188]: No configuration found.
Apr 17 23:37:48.292588 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:37:48.292598 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:37:48.292609 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:37:48.292622 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:37:48.292633 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:37:48.292647 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:37:48.292672 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:37:48.292685 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:37:48.292699 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:37:48.292709 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:37:48.292724 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:37:48.292733 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:37:48.292747 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:37:48.292758 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:37:48.292772 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:37:48.292784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:37:48.292798 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:37:48.292809 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:37:48.292820 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:37:48.292833 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:37:48.292847 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:37:48.292861 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:37:48.292875 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:37:48.292889 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:37:48.292903 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:37:48.292918 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:37:48.292928 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:37:48.292938 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:37:48.292952 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:37:48.292963 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:37:48.292977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:37:48.292990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:37:48.293005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:37:48.293015 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:37:48.293030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:37:48.293041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:37:48.293057 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:37:48.293068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:37:48.293082 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:37:48.293093 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:37:48.293106 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:37:48.293118 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:37:48.293128 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:37:48.293142 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:37:48.293158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:37:48.293171 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:37:48.293181 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:37:48.293196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:37:48.293206 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:37:48.293220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:37:48.293231 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:37:48.293245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:37:48.293258 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:37:48.293273 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:37:48.293287 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:37:48.293298 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:37:48.293311 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:37:48.293322 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:37:48.293335 kernel: loop: module loaded
Apr 17 23:37:48.293346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:37:48.293357 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:37:48.293373 kernel: fuse: init (API version 7.39)
Apr 17 23:37:48.293385 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:37:48.293397 kernel: ACPI: bus type drm_connector registered
Apr 17 23:37:48.293406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:37:48.293422 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:37:48.293432 systemd[1]: Stopped verity-setup.service.
Apr 17 23:37:48.293460 systemd-journald[1280]: Collecting audit messages is disabled.
Apr 17 23:37:48.293488 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:37:48.293503 systemd-journald[1280]: Journal started
Apr 17 23:37:48.293528 systemd-journald[1280]: Runtime Journal (/run/log/journal/30315476a45b4b2189d6152ab34940a1) is 8.0M, max 158.7M, 150.7M free.
Apr 17 23:37:47.562581 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:37:47.697789 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:37:47.698167 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:37:48.305614 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:37:48.306348 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:37:48.309957 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:37:48.313467 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:37:48.316746 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:37:48.320203 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 17 23:37:48.323515 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 17 23:37:48.326910 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 17 23:37:48.330945 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:37:48.334902 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 17 23:37:48.335094 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 17 23:37:48.339128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:37:48.339294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:37:48.344207 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:37:48.344337 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:37:48.348946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:37:48.349104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:37:48.352983 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 17 23:37:48.353143 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 17 23:37:48.356638 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:37:48.358886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:37:48.362579 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:37:48.366573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 23:37:48.374279 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 17 23:37:48.401015 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Apr 17 23:37:48.414378 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 17 23:37:48.423718 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 17 23:37:48.427725 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 17 23:37:48.427769 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:37:48.432140 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 17 23:37:48.441649 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 23:37:48.450829 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 17 23:37:48.456826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:37:48.458236 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 17 23:37:48.463864 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 17 23:37:48.467521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:37:48.473880 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 17 23:37:48.479821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:37:48.488791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:37:48.496042 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 17 23:37:48.505934 systemd-journald[1280]: Time spent on flushing to /var/log/journal/30315476a45b4b2189d6152ab34940a1 is 24.819ms for 952 entries. 
Apr 17 23:37:48.505934 systemd-journald[1280]: System Journal (/var/log/journal/30315476a45b4b2189d6152ab34940a1) is 8.0M, max 2.6G, 2.6G free. Apr 17 23:37:48.553039 systemd-journald[1280]: Received client request to flush runtime journal. Apr 17 23:37:48.512878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:37:48.518911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:37:48.524090 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 17 23:37:48.529109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 17 23:37:48.537956 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:37:48.543078 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 17 23:37:48.556447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:37:48.562975 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:37:48.574373 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:37:48.580255 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 17 23:37:48.612340 udevadm[1336]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 17 23:37:48.624478 kernel: loop0: detected capacity change from 0 to 31056 Apr 17 23:37:48.634753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:37:48.647091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:37:48.649756 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 17 23:37:48.691170 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. 
Apr 17 23:37:48.691198 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Apr 17 23:37:48.698815 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:37:48.711800 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 17 23:37:48.807066 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:37:48.816844 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:37:48.833056 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Apr 17 23:37:48.833083 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Apr 17 23:37:48.838539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:37:49.122689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:37:49.181684 kernel: loop1: detected capacity change from 0 to 142488 Apr 17 23:37:49.481421 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:37:49.492824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:37:49.515811 systemd-udevd[1351]: Using default interface naming scheme 'v255'. Apr 17 23:37:49.677071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:37:49.689805 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:37:49.719284 kernel: loop2: detected capacity change from 0 to 140768 Apr 17 23:37:49.755111 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:37:49.838362 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:37:49.858161 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 17 23:37:49.888037 kernel: hv_vmbus: registering driver hv_balloon Apr 17 23:37:49.888129 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 17 23:37:49.918709 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:37:49.918795 kernel: hv_vmbus: registering driver hyperv_fb Apr 17 23:37:49.924888 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 17 23:37:49.928548 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 17 23:37:49.934371 kernel: Console: switching to colour dummy device 80x25 Apr 17 23:37:49.940703 kernel: Console: switching to colour frame buffer device 128x48 Apr 17 23:37:49.941672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 17 23:37:50.145796 systemd-networkd[1356]: lo: Link UP Apr 17 23:37:50.145811 systemd-networkd[1356]: lo: Gained carrier Apr 17 23:37:50.152278 systemd-networkd[1356]: Enumeration completed Apr 17 23:37:50.152435 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:37:50.153151 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:50.153159 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:37:50.179212 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:37:50.185851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:37:50.243852 kernel: mlx5_core 4603:00:02.0 enP17923s1: Link up Apr 17 23:37:50.244486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:37:50.244839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:50.259277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 23:37:50.284698 kernel: hv_netvsc 7ced8d75-dd31-7ced-8d75-dd317ced8d75 eth0: Data path switched to VF: enP17923s1 Apr 17 23:37:50.293323 systemd-networkd[1356]: enP17923s1: Link UP Apr 17 23:37:50.293640 systemd-networkd[1356]: eth0: Link UP Apr 17 23:37:50.293669 systemd-networkd[1356]: eth0: Gained carrier Apr 17 23:37:50.302504 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:50.308811 systemd-networkd[1356]: enP17923s1: Gained carrier Apr 17 23:37:50.323169 kernel: loop3: detected capacity change from 0 to 228704 Apr 17 23:37:50.331709 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1355) Apr 17 23:37:50.338784 systemd-networkd[1356]: eth0: DHCPv4 address 10.0.0.22/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 17 23:37:50.383177 kernel: loop4: detected capacity change from 0 to 31056 Apr 17 23:37:50.386709 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 17 23:37:50.415841 kernel: loop5: detected capacity change from 0 to 142488 Apr 17 23:37:50.433245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 17 23:37:50.468679 kernel: loop6: detected capacity change from 0 to 140768 Apr 17 23:37:50.476848 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:37:50.515717 kernel: loop7: detected capacity change from 0 to 228704 Apr 17 23:37:50.542726 (sd-merge)[1432]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 17 23:37:50.543333 (sd-merge)[1432]: Merged extensions into '/usr'. Apr 17 23:37:50.548958 systemd[1]: Reloading requested from client PID 1324 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:37:50.548976 systemd[1]: Reloading... Apr 17 23:37:50.627684 zram_generator::config[1477]: No configuration found. 
Apr 17 23:37:50.785039 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:37:50.860236 systemd[1]: Reloading finished in 310 ms. Apr 17 23:37:50.892579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:37:50.897199 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:37:50.902017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:37:50.915866 systemd[1]: Starting ensure-sysext.service... Apr 17 23:37:50.920839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:37:50.937722 systemd[1]: Reloading requested from client PID 1539 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:37:50.937746 systemd[1]: Reloading... Apr 17 23:37:50.966596 systemd-tmpfiles[1540]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:37:50.972883 systemd-tmpfiles[1540]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:37:50.974575 systemd-tmpfiles[1540]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:37:50.975443 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. Apr 17 23:37:50.975525 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. Apr 17 23:37:50.987255 systemd-tmpfiles[1540]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:37:50.987268 systemd-tmpfiles[1540]: Skipping /boot Apr 17 23:37:51.003851 systemd-tmpfiles[1540]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 17 23:37:51.003868 systemd-tmpfiles[1540]: Skipping /boot Apr 17 23:37:51.051683 zram_generator::config[1570]: No configuration found. Apr 17 23:37:51.183934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:37:51.260671 systemd[1]: Reloading finished in 322 ms. Apr 17 23:37:51.281617 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:37:51.294040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:37:51.313069 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:37:51.331135 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:37:51.337981 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:37:51.343894 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:37:51.364404 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:37:51.370567 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:37:51.378733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:51.379168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:37:51.385007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:37:51.390950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:37:51.402018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 17 23:37:51.405228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:37:51.405395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:51.406600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:37:51.407586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:37:51.427852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:51.428304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:37:51.435048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:37:51.439158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:37:51.439420 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:51.445176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:37:51.461230 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:37:51.468366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:51.469106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:37:51.479046 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 17 23:37:51.483147 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:37:51.483446 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:37:51.491473 lvm[1635]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:37:51.493089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:37:51.494323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:37:51.495732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:37:51.500529 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:37:51.500743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:37:51.511711 systemd[1]: Finished ensure-sysext.service. Apr 17 23:37:51.515042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:37:51.515176 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:37:51.520930 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:37:51.521113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:37:51.526204 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:37:51.534982 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:37:51.536471 systemd-resolved[1642]: Positive Trust Anchors: Apr 17 23:37:51.536484 systemd-resolved[1642]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:37:51.536524 systemd-resolved[1642]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:37:51.543859 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:37:51.547929 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:37:51.548019 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:37:51.562687 lvm[1666]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:37:51.598587 augenrules[1668]: No rules Apr 17 23:37:51.600420 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:37:51.603846 systemd-resolved[1642]: Using system hostname 'ci-4081.3.6-n-b8c45c9493'. Apr 17 23:37:51.604911 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:37:51.609225 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:37:51.613593 systemd[1]: Reached target network.target - Network. Apr 17 23:37:51.616434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:37:51.955124 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 17 23:37:51.966899 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:37:51.980822 systemd-networkd[1356]: eth0: Gained IPv6LL Apr 17 23:37:51.983792 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:37:51.987906 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:37:55.946402 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:37:55.958676 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:37:55.967842 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:37:55.981564 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:37:55.985155 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:37:55.990890 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:37:55.994602 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:37:55.998491 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:37:56.001574 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:37:56.005104 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:37:56.008600 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:37:56.008685 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:37:56.011540 systemd[1]: Reached target timers.target - Timer Units. 
Apr 17 23:37:56.015343 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:37:56.019979 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:37:56.034765 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:37:56.038900 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:37:56.042434 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:37:56.045486 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:37:56.048398 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:37:56.048421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:37:56.054772 systemd[1]: Starting chronyd.service - NTP client/server... Apr 17 23:37:56.061912 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:37:56.076833 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:37:56.081867 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:37:56.088805 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:37:56.094828 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:37:56.097671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:37:56.098803 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 17 23:37:56.101191 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 17 23:37:56.104468 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Apr 17 23:37:56.107908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:56.114840 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:37:56.120195 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:37:56.127071 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:37:56.138568 jq[1686]: false Apr 17 23:37:56.133782 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:37:56.140847 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:37:56.154497 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 17 23:37:56.155946 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:37:56.161230 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:37:56.161858 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:37:56.162627 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:37:56.172886 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:37:56.180015 KVP[1688]: KVP starting; pid is:1688 Apr 17 23:37:56.188246 chronyd[1705]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 17 23:37:56.193101 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:37:56.193361 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:37:56.199134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 17 23:37:56.199361 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:37:56.207812 kernel: hv_utils: KVP IC version 4.0 Apr 17 23:37:56.207689 KVP[1688]: KVP LIC Version: 3.1 Apr 17 23:37:56.209792 chronyd[1705]: Timezone right/UTC failed leap second check, ignoring Apr 17 23:37:56.210006 chronyd[1705]: Loaded seccomp filter (level 2) Apr 17 23:37:56.213892 systemd[1]: Started chronyd.service - NTP client/server. Apr 17 23:37:56.218141 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:37:56.219718 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:37:56.228922 jq[1702]: true Apr 17 23:37:56.236401 extend-filesystems[1687]: Found loop4 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found loop5 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found loop6 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found loop7 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda1 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda2 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda3 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found usr Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda4 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda6 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda7 Apr 17 23:37:56.236401 extend-filesystems[1687]: Found sda9 Apr 17 23:37:56.236401 extend-filesystems[1687]: Checking size of /dev/sda9 Apr 17 23:37:56.278744 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:37:56.304067 update_engine[1700]: I20260417 23:37:56.290150 1700 main.cc:92] Flatcar Update Engine starting Apr 17 23:37:56.309835 jq[1718]: true Apr 17 23:37:56.324634 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 23:37:56.330840 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:37:56.332683 systemd-logind[1697]: New seat seat0. Apr 17 23:37:56.335420 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:37:56.357161 dbus-daemon[1685]: [system] SELinux support is enabled Apr 17 23:37:56.364388 update_engine[1700]: I20260417 23:37:56.361300 1700 update_check_scheduler.cc:74] Next update check in 11m58s Apr 17 23:37:56.357346 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:37:56.365839 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:37:56.365874 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:37:56.372515 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:37:56.372554 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:37:56.384265 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:37:56.400299 extend-filesystems[1687]: Old size kept for /dev/sda9 Apr 17 23:37:56.425825 extend-filesystems[1687]: Found sr0 Apr 17 23:37:56.404448 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:37:56.411884 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:37:56.412094 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:37:56.440344 bash[1748]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:37:56.442414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Apr 17 23:37:56.450487 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 23:37:56.507745 tar[1712]: linux-amd64/LICENSE Apr 17 23:37:56.507745 tar[1712]: linux-amd64/helm Apr 17 23:37:56.508853 coreos-metadata[1684]: Apr 17 23:37:56.508 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 17 23:37:56.512579 coreos-metadata[1684]: Apr 17 23:37:56.511 INFO Fetch successful Apr 17 23:37:56.512579 coreos-metadata[1684]: Apr 17 23:37:56.511 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 17 23:37:56.515498 coreos-metadata[1684]: Apr 17 23:37:56.515 INFO Fetch successful Apr 17 23:37:56.515618 coreos-metadata[1684]: Apr 17 23:37:56.515 INFO Fetching http://168.63.129.16/machine/dae712c0-b561-4943-8ba6-540d34ff05dc/98bac52b%2Dd880%2D49ab%2D9df1%2D2104c2d77cbb.%5Fci%2D4081.3.6%2Dn%2Db8c45c9493?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 17 23:37:56.517414 coreos-metadata[1684]: Apr 17 23:37:56.517 INFO Fetch successful Apr 17 23:37:56.518130 coreos-metadata[1684]: Apr 17 23:37:56.517 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 17 23:37:56.532450 coreos-metadata[1684]: Apr 17 23:37:56.530 INFO Fetch successful Apr 17 23:37:56.617439 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:37:56.630094 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1756) Apr 17 23:37:56.627344 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:37:56.860879 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:37:56.881502 locksmithd[1762]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:37:56.906954 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Apr 17 23:37:56.922208 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:37:56.931007 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 17 23:37:56.954099 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:37:56.954489 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:37:56.968090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:37:57.009688 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:37:57.022815 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:37:57.028971 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:37:57.036597 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:37:57.046974 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 17 23:37:57.457382 tar[1712]: linux-amd64/README.md Apr 17 23:37:57.472099 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:37:57.504129 containerd[1726]: time="2026-04-17T23:37:57.503502500Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:37:57.537989 containerd[1726]: time="2026-04-17T23:37:57.537933700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.540711600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.540772300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.540797700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541004800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541030900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541111300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541133300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541354100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541379100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541403400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:37:57.541681 containerd[1726]: time="2026-04-17T23:37:57.541422500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:37:57.542127 containerd[1726]: time="2026-04-17T23:37:57.541515900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:37:57.542127 containerd[1726]: time="2026-04-17T23:37:57.541771500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:37:57.542127 containerd[1726]: time="2026-04-17T23:37:57.541942800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:37:57.542127 containerd[1726]: time="2026-04-17T23:37:57.541969200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:37:57.542127 containerd[1726]: time="2026-04-17T23:37:57.542082700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:37:57.542299 containerd[1726]: time="2026-04-17T23:37:57.542144200Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:37:57.557728 containerd[1726]: time="2026-04-17T23:37:57.557419000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:37:57.557728 containerd[1726]: time="2026-04-17T23:37:57.557487700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:37:57.557728 containerd[1726]: time="2026-04-17T23:37:57.557521600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:37:57.557728 containerd[1726]: time="2026-04-17T23:37:57.557546700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 17 23:37:57.557728 containerd[1726]: time="2026-04-17T23:37:57.557567800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:37:57.558012 containerd[1726]: time="2026-04-17T23:37:57.557749900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:37:57.558119 containerd[1726]: time="2026-04-17T23:37:57.558077200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558237300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558266600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558287500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558318700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558338500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558358100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558377200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558395400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558416400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558433400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558450000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558476400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558498300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.559517 containerd[1726]: time="2026-04-17T23:37:57.558515700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558533400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558550100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558575500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558594300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558612400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558629400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558678100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558696600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558714100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558730700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558752400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558779900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558806400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560096 containerd[1726]: time="2026-04-17T23:37:57.558824700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558883400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558905100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558920200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558937000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558951500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558968300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558982200Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:37:57.560572 containerd[1726]: time="2026-04-17T23:37:57.558996500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:37:57.560920 containerd[1726]: time="2026-04-17T23:37:57.559372700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:37:57.560920 containerd[1726]: time="2026-04-17T23:37:57.559467000Z" level=info msg="Connect containerd service" Apr 17 23:37:57.560920 containerd[1726]: time="2026-04-17T23:37:57.559521200Z" level=info msg="using legacy CRI server" Apr 17 23:37:57.560920 containerd[1726]: time="2026-04-17T23:37:57.559530400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:37:57.560920 containerd[1726]: time="2026-04-17T23:37:57.559669200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.561955300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562325700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562375100Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562405300Z" level=info msg="Start subscribing containerd event" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562443200Z" level=info msg="Start recovering state" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562509600Z" level=info msg="Start event monitor" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562526600Z" level=info msg="Start snapshots syncer" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562539000Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562548700Z" level=info msg="Start streaming server" Apr 17 23:37:57.566111 containerd[1726]: time="2026-04-17T23:37:57.562602000Z" level=info msg="containerd successfully booted in 0.060075s" Apr 17 23:37:57.562874 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:37:57.888263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:37:57.892486 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:37:57.894942 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:37:57.900719 systemd[1]: Startup finished in 1.053s (kernel) + 13.231s (initrd) + 13.152s (userspace) = 27.437s. Apr 17 23:37:58.439183 login[1824]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 17 23:37:58.443264 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 17 23:37:58.455474 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:37:58.466605 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 17 23:37:58.471548 systemd-logind[1697]: New session 2 of user core. Apr 17 23:37:58.477016 systemd-logind[1697]: New session 1 of user core. Apr 17 23:37:58.507791 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:37:58.515283 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:37:58.531814 (systemd)[1855]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:37:58.656096 kubelet[1844]: E0417 23:37:58.655606 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:37:58.659280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:37:58.660256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:37:58.701118 systemd[1855]: Queued start job for default target default.target. Apr 17 23:37:58.707620 systemd[1855]: Created slice app.slice - User Application Slice. Apr 17 23:37:58.707667 systemd[1855]: Reached target paths.target - Paths. Apr 17 23:37:58.707687 systemd[1855]: Reached target timers.target - Timers. Apr 17 23:37:58.709259 systemd[1855]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:37:58.727554 systemd[1855]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:37:58.727725 systemd[1855]: Reached target sockets.target - Sockets. Apr 17 23:37:58.727747 systemd[1855]: Reached target basic.target - Basic System. Apr 17 23:37:58.727796 systemd[1855]: Reached target default.target - Main User Target. Apr 17 23:37:58.727840 systemd[1855]: Startup finished in 187ms. Apr 17 23:37:58.727973 systemd[1]: Started user@500.service - User Manager for UID 500. 
Apr 17 23:37:58.732849 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:37:58.734859 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:37:59.030377 waagent[1826]: 2026-04-17T23:37:59.030211Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 17 23:37:59.034631 waagent[1826]: 2026-04-17T23:37:59.034559Z INFO Daemon Daemon OS: flatcar 4081.3.6 Apr 17 23:37:59.037667 waagent[1826]: 2026-04-17T23:37:59.037604Z INFO Daemon Daemon Python: 3.11.9 Apr 17 23:37:59.040521 waagent[1826]: 2026-04-17T23:37:59.040462Z INFO Daemon Daemon Run daemon Apr 17 23:37:59.043097 waagent[1826]: 2026-04-17T23:37:59.043049Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Apr 17 23:37:59.048735 waagent[1826]: 2026-04-17T23:37:59.048686Z INFO Daemon Daemon Using waagent for provisioning Apr 17 23:37:59.052455 waagent[1826]: 2026-04-17T23:37:59.052398Z INFO Daemon Daemon Activate resource disk Apr 17 23:37:59.055638 waagent[1826]: 2026-04-17T23:37:59.055587Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 17 23:37:59.063990 waagent[1826]: 2026-04-17T23:37:59.063933Z INFO Daemon Daemon Found device: None Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.064147Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.065282Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.067833Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.068431Z INFO Daemon Daemon Running default provisioning handler Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.077542Z INFO Daemon Daemon Unable to get cloud-init enabled 
status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.079202Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.080229Z INFO Daemon Daemon cloud-init is enabled: False Apr 17 23:37:59.105267 waagent[1826]: 2026-04-17T23:37:59.080863Z INFO Daemon Daemon Copying ovf-env.xml Apr 17 23:37:59.212621 waagent[1826]: 2026-04-17T23:37:59.209901Z INFO Daemon Daemon Successfully mounted dvd Apr 17 23:37:59.256692 waagent[1826]: 2026-04-17T23:37:59.244823Z INFO Daemon Daemon Detect protocol endpoint Apr 17 23:37:59.256692 waagent[1826]: 2026-04-17T23:37:59.245159Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 17 23:37:59.256692 waagent[1826]: 2026-04-17T23:37:59.246426Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 17 23:37:59.256692 waagent[1826]: 2026-04-17T23:37:59.247731Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 17 23:37:59.256692 waagent[1826]: 2026-04-17T23:37:59.248386Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 17 23:37:59.256692 waagent[1826]: 2026-04-17T23:37:59.249448Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 17 23:37:59.266142 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Apr 17 23:37:59.276380 waagent[1826]: 2026-04-17T23:37:59.276321Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 17 23:37:59.280321 waagent[1826]: 2026-04-17T23:37:59.280282Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 17 23:37:59.286775 waagent[1826]: 2026-04-17T23:37:59.280468Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 17 23:37:59.386754 waagent[1826]: 2026-04-17T23:37:59.386619Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 17 23:37:59.390916 waagent[1826]: 2026-04-17T23:37:59.390843Z INFO Daemon Daemon Forcing an update of the goal state. Apr 17 23:37:59.397345 waagent[1826]: 2026-04-17T23:37:59.397289Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 17 23:37:59.416263 waagent[1826]: 2026-04-17T23:37:59.416199Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181 Apr 17 23:37:59.420902 waagent[1826]: 2026-04-17T23:37:59.416900Z INFO Daemon Apr 17 23:37:59.420902 waagent[1826]: 2026-04-17T23:37:59.417146Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8e8a5757-6342-41b7-a4dd-24a1ed422a26 eTag: 15728668322281331096 source: Fabric] Apr 17 23:37:59.420902 waagent[1826]: 2026-04-17T23:37:59.418586Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Apr 17 23:37:59.420902 waagent[1826]: 2026-04-17T23:37:59.419795Z INFO Daemon Apr 17 23:37:59.420902 waagent[1826]: 2026-04-17T23:37:59.420227Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 17 23:37:59.438280 waagent[1826]: 2026-04-17T23:37:59.424499Z INFO Daemon Daemon Downloading artifacts profile blob Apr 17 23:37:59.569968 waagent[1826]: 2026-04-17T23:37:59.569825Z INFO Daemon Downloaded certificate {'thumbprint': 'AA5621E789651635CAFF9FC4E614083EC83B43D2', 'hasPrivateKey': True} Apr 17 23:37:59.582132 waagent[1826]: 2026-04-17T23:37:59.582057Z INFO Daemon Fetch goal state completed Apr 17 23:37:59.628912 waagent[1826]: 2026-04-17T23:37:59.628838Z INFO Daemon Daemon Starting provisioning Apr 17 23:37:59.632829 waagent[1826]: 2026-04-17T23:37:59.629148Z INFO Daemon Daemon Handle ovf-env.xml. Apr 17 23:37:59.632829 waagent[1826]: 2026-04-17T23:37:59.630252Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-b8c45c9493] Apr 17 23:37:59.638750 waagent[1826]: 2026-04-17T23:37:59.638684Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-b8c45c9493] Apr 17 23:37:59.642286 waagent[1826]: 2026-04-17T23:37:59.642230Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 17 23:37:59.645315 waagent[1826]: 2026-04-17T23:37:59.642566Z INFO Daemon Daemon Primary interface is [eth0] Apr 17 23:37:59.667374 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:37:59.667386 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 17 23:37:59.667418 systemd-networkd[1356]: eth0: DHCP lease lost Apr 17 23:37:59.668732 waagent[1826]: 2026-04-17T23:37:59.668565Z INFO Daemon Daemon Create user account if not exists Apr 17 23:37:59.689329 waagent[1826]: 2026-04-17T23:37:59.669004Z INFO Daemon Daemon User core already exists, skip useradd Apr 17 23:37:59.689329 waagent[1826]: 2026-04-17T23:37:59.671130Z INFO Daemon Daemon Configure sudoer Apr 17 23:37:59.689329 waagent[1826]: 2026-04-17T23:37:59.672472Z INFO Daemon Daemon Configure sshd Apr 17 23:37:59.689329 waagent[1826]: 2026-04-17T23:37:59.673545Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 17 23:37:59.689329 waagent[1826]: 2026-04-17T23:37:59.674407Z INFO Daemon Daemon Deploy ssh public key. Apr 17 23:37:59.689800 systemd-networkd[1356]: eth0: DHCPv6 lease lost Apr 17 23:37:59.719698 systemd-networkd[1356]: eth0: DHCPv4 address 10.0.0.22/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 17 23:38:00.811307 waagent[1826]: 2026-04-17T23:38:00.811241Z INFO Daemon Daemon Provisioning complete Apr 17 23:38:00.823636 waagent[1826]: 2026-04-17T23:38:00.823555Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 17 23:38:00.831974 waagent[1826]: 2026-04-17T23:38:00.823922Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 17 23:38:00.831974 waagent[1826]: 2026-04-17T23:38:00.824602Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 17 23:38:00.949560 waagent[1909]: 2026-04-17T23:38:00.949450Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 17 23:38:00.950033 waagent[1909]: 2026-04-17T23:38:00.949618Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Apr 17 23:38:00.950033 waagent[1909]: 2026-04-17T23:38:00.949725Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 17 23:38:00.988765 waagent[1909]: 2026-04-17T23:38:00.988648Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 17 23:38:00.988997 waagent[1909]: 2026-04-17T23:38:00.988946Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 17 23:38:00.989091 waagent[1909]: 2026-04-17T23:38:00.989048Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 17 23:38:00.996383 waagent[1909]: 2026-04-17T23:38:00.996313Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 17 23:38:01.001442 waagent[1909]: 2026-04-17T23:38:01.001386Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Apr 17 23:38:01.001928 waagent[1909]: 2026-04-17T23:38:01.001871Z INFO ExtHandler Apr 17 23:38:01.002008 waagent[1909]: 2026-04-17T23:38:01.001967Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2e9af6a4-4a58-40af-9421-4af798faa343 eTag: 15728668322281331096 source: Fabric] Apr 17 23:38:01.002316 waagent[1909]: 2026-04-17T23:38:01.002264Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 17 23:38:01.002882 waagent[1909]: 2026-04-17T23:38:01.002828Z INFO ExtHandler Apr 17 23:38:01.002946 waagent[1909]: 2026-04-17T23:38:01.002917Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 17 23:38:01.006343 waagent[1909]: 2026-04-17T23:38:01.006302Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 17 23:38:01.067438 waagent[1909]: 2026-04-17T23:38:01.067281Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AA5621E789651635CAFF9FC4E614083EC83B43D2', 'hasPrivateKey': True} Apr 17 23:38:01.067981 waagent[1909]: 2026-04-17T23:38:01.067918Z INFO ExtHandler Fetch goal state completed Apr 17 23:38:01.083788 waagent[1909]: 2026-04-17T23:38:01.083713Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1909 Apr 17 23:38:01.083953 waagent[1909]: 2026-04-17T23:38:01.083904Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 17 23:38:01.085486 waagent[1909]: 2026-04-17T23:38:01.085426Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Apr 17 23:38:01.085854 waagent[1909]: 2026-04-17T23:38:01.085803Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 17 23:38:01.121436 waagent[1909]: 2026-04-17T23:38:01.121383Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 17 23:38:01.121677 waagent[1909]: 2026-04-17T23:38:01.121616Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 17 23:38:01.128259 waagent[1909]: 2026-04-17T23:38:01.128218Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 17 23:38:01.135217 systemd[1]: Reloading requested from client PID 1922 ('systemctl') (unit waagent.service)... Apr 17 23:38:01.135236 systemd[1]: Reloading... 
Apr 17 23:38:01.234688 zram_generator::config[1959]: No configuration found.
Apr 17 23:38:01.350842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:38:01.433152 systemd[1]: Reloading finished in 297 ms.
Apr 17 23:38:01.460672 waagent[1909]: 2026-04-17T23:38:01.458199Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Apr 17 23:38:01.468108 systemd[1]: Reloading requested from client PID 2012 ('systemctl') (unit waagent.service)...
Apr 17 23:38:01.468126 systemd[1]: Reloading...
Apr 17 23:38:01.563738 zram_generator::config[2048]: No configuration found.
Apr 17 23:38:01.695840 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:38:01.777999 systemd[1]: Reloading finished in 309 ms.
Apr 17 23:38:01.804042 waagent[1909]: 2026-04-17T23:38:01.803428Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Apr 17 23:38:01.804042 waagent[1909]: 2026-04-17T23:38:01.803615Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Apr 17 23:38:03.152423 waagent[1909]: 2026-04-17T23:38:03.152329Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Apr 17 23:38:03.153104 waagent[1909]: 2026-04-17T23:38:03.153041Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Apr 17 23:38:03.153916 waagent[1909]: 2026-04-17T23:38:03.153833Z INFO ExtHandler ExtHandler Starting env monitor service.
Apr 17 23:38:03.154371 waagent[1909]: 2026-04-17T23:38:03.154281Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Apr 17 23:38:03.154562 waagent[1909]: 2026-04-17T23:38:03.154517Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 17 23:38:03.154876 waagent[1909]: 2026-04-17T23:38:03.154807Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 17 23:38:03.154967 waagent[1909]: 2026-04-17T23:38:03.154877Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Apr 17 23:38:03.155210 waagent[1909]: 2026-04-17T23:38:03.155152Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 17 23:38:03.155602 waagent[1909]: 2026-04-17T23:38:03.155542Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 17 23:38:03.155792 waagent[1909]: 2026-04-17T23:38:03.155687Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Apr 17 23:38:03.156004 waagent[1909]: 2026-04-17T23:38:03.155944Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 17 23:38:03.156137 waagent[1909]: 2026-04-17T23:38:03.156068Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 17 23:38:03.156278 waagent[1909]: 2026-04-17T23:38:03.156200Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 17 23:38:03.157108 waagent[1909]: 2026-04-17T23:38:03.157062Z INFO EnvHandler ExtHandler Configure routes
Apr 17 23:38:03.157458 waagent[1909]: 2026-04-17T23:38:03.157407Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Apr 17 23:38:03.157901 waagent[1909]: 2026-04-17T23:38:03.157774Z INFO EnvHandler ExtHandler Gateway:None
Apr 17 23:38:03.158052 waagent[1909]: 2026-04-17T23:38:03.157967Z INFO EnvHandler ExtHandler Routes:None
Apr 17 23:38:03.159053 waagent[1909]: 2026-04-17T23:38:03.158957Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 17 23:38:03.159053 waagent[1909]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 17 23:38:03.159053 waagent[1909]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0
Apr 17 23:38:03.159053 waagent[1909]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 17 23:38:03.159053 waagent[1909]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 17 23:38:03.159053 waagent[1909]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 17 23:38:03.159053 waagent[1909]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 17 23:38:03.164674 waagent[1909]: 2026-04-17T23:38:03.163328Z INFO ExtHandler ExtHandler
Apr 17 23:38:03.164674 waagent[1909]: 2026-04-17T23:38:03.163434Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f97b90a6-a08a-47da-b680-d9898ec9f978 correlation 421ba571-1b3c-4468-bf39-eb76bd2cb9fe created: 2026-04-17T23:37:00.922455Z]
Apr 17 23:38:03.165958 waagent[1909]: 2026-04-17T23:38:03.165908Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 17 23:38:03.166713 waagent[1909]: 2026-04-17T23:38:03.166634Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Apr 17 23:38:03.208592 waagent[1909]: 2026-04-17T23:38:03.208506Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 17 23:38:03.208592 waagent[1909]: Executing ['ip', '-a', '-o', 'link']:
Apr 17 23:38:03.208592 waagent[1909]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 17 23:38:03.208592 waagent[1909]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:75:dd:31 brd ff:ff:ff:ff:ff:ff
Apr 17 23:38:03.208592 waagent[1909]: 3: enP17923s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:75:dd:31 brd ff:ff:ff:ff:ff:ff\ altname enP17923p0s2
Apr 17 23:38:03.208592 waagent[1909]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 17 23:38:03.208592 waagent[1909]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 17 23:38:03.208592 waagent[1909]: 2: eth0 inet 10.0.0.22/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 17 23:38:03.208592 waagent[1909]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 17 23:38:03.208592 waagent[1909]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 17 23:38:03.208592 waagent[1909]: 2: eth0 inet6 fe80::7eed:8dff:fe75:dd31/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 17 23:38:03.213558 waagent[1909]: 2026-04-17T23:38:03.213493Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5FFE282D-E955-4348-A72A-FADFF9FDD514;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 17 23:38:03.314077 waagent[1909]: 2026-04-17T23:38:03.313995Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Apr 17 23:38:03.314077 waagent[1909]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 17 23:38:03.314077 waagent[1909]: pkts bytes target prot opt in out source destination
Apr 17 23:38:03.314077 waagent[1909]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 17 23:38:03.314077 waagent[1909]: pkts bytes target prot opt in out source destination
Apr 17 23:38:03.314077 waagent[1909]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 17 23:38:03.314077 waagent[1909]: pkts bytes target prot opt in out source destination
Apr 17 23:38:03.314077 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 17 23:38:03.314077 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 17 23:38:03.314077 waagent[1909]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 17 23:38:03.317449 waagent[1909]: 2026-04-17T23:38:03.317384Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 17 23:38:03.317449 waagent[1909]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 17 23:38:03.317449 waagent[1909]: pkts bytes target prot opt in out source destination
Apr 17 23:38:03.317449 waagent[1909]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 17 23:38:03.317449 waagent[1909]: pkts bytes target prot opt in out source destination
Apr 17 23:38:03.317449 waagent[1909]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 17 23:38:03.317449 waagent[1909]: pkts bytes target prot opt in out source destination
Apr 17 23:38:03.317449 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 17 23:38:03.317449 waagent[1909]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 17 23:38:03.317449 waagent[1909]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 17 23:38:03.317860 waagent[1909]: 2026-04-17T23:38:03.317734Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 17 23:38:08.911277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:38:08.917261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:38:09.026192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:38:09.041104 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:38:09.735611 kubelet[2143]: E0417 23:38:09.735561 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:38:09.739670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:38:09.739889 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:38:17.213021 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:38:17.217931 systemd[1]: Started sshd@0-10.0.0.22:22-20.229.252.112:45248.service - OpenSSH per-connection server daemon (20.229.252.112:45248).
Apr 17 23:38:17.450704 sshd[2150]: Accepted publickey for core from 20.229.252.112 port 45248 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:38:17.451280 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:17.455231 systemd-logind[1697]: New session 3 of user core.
Apr 17 23:38:17.461824 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:38:17.584090 systemd[1]: Started sshd@1-10.0.0.22:22-20.229.252.112:45254.service - OpenSSH per-connection server daemon (20.229.252.112:45254).
Apr 17 23:38:17.705847 sshd[2155]: Accepted publickey for core from 20.229.252.112 port 45254 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:38:17.707258 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:17.711737 systemd-logind[1697]: New session 4 of user core.
Apr 17 23:38:17.717794 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:38:17.816225 sshd[2155]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:17.819291 systemd[1]: sshd@1-10.0.0.22:22-20.229.252.112:45254.service: Deactivated successfully.
Apr 17 23:38:17.821264 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:38:17.821999 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:38:17.823411 systemd-logind[1697]: Removed session 4.
Apr 17 23:38:17.844076 systemd[1]: Started sshd@2-10.0.0.22:22-20.229.252.112:45264.service - OpenSSH per-connection server daemon (20.229.252.112:45264).
Apr 17 23:38:17.963342 sshd[2162]: Accepted publickey for core from 20.229.252.112 port 45264 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:38:17.964842 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:17.968999 systemd-logind[1697]: New session 5 of user core.
Apr 17 23:38:17.978807 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:38:18.072634 sshd[2162]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:18.076905 systemd[1]: sshd@2-10.0.0.22:22-20.229.252.112:45264.service: Deactivated successfully.
Apr 17 23:38:18.079017 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:38:18.079800 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:38:18.080710 systemd-logind[1697]: Removed session 5.
Apr 17 23:38:18.099159 systemd[1]: Started sshd@3-10.0.0.22:22-20.229.252.112:45268.service - OpenSSH per-connection server daemon (20.229.252.112:45268).
Apr 17 23:38:18.220373 sshd[2169]: Accepted publickey for core from 20.229.252.112 port 45268 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:38:18.221856 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:18.226366 systemd-logind[1697]: New session 6 of user core.
Apr 17 23:38:18.231869 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:38:18.330645 sshd[2169]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:18.334178 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:38:18.335070 systemd[1]: sshd@3-10.0.0.22:22-20.229.252.112:45268.service: Deactivated successfully.
Apr 17 23:38:18.337048 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:38:18.337916 systemd-logind[1697]: Removed session 6.
Apr 17 23:38:18.356341 systemd[1]: Started sshd@4-10.0.0.22:22-20.229.252.112:45284.service - OpenSSH per-connection server daemon (20.229.252.112:45284).
Apr 17 23:38:18.475129 sshd[2176]: Accepted publickey for core from 20.229.252.112 port 45284 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:38:18.476566 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:18.481697 systemd-logind[1697]: New session 7 of user core.
Apr 17 23:38:18.487858 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:38:18.713932 sudo[2179]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:38:18.714319 sudo[2179]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:38:18.740506 sudo[2179]: pam_unix(sudo:session): session closed for user root
Apr 17 23:38:18.756860 sshd[2176]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:18.760120 systemd[1]: sshd@4-10.0.0.22:22-20.229.252.112:45284.service: Deactivated successfully.
Apr 17 23:38:18.762176 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:38:18.763688 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:38:18.764724 systemd-logind[1697]: Removed session 7.
Apr 17 23:38:18.779350 systemd[1]: Started sshd@5-10.0.0.22:22-20.229.252.112:45290.service - OpenSSH per-connection server daemon (20.229.252.112:45290).
Apr 17 23:38:18.898633 sshd[2184]: Accepted publickey for core from 20.229.252.112 port 45290 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:38:18.900138 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:18.904624 systemd-logind[1697]: New session 8 of user core.
Apr 17 23:38:18.913811 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:38:18.997393 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:38:18.997780 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:38:19.001638 sudo[2188]: pam_unix(sudo:session): session closed for user root
Apr 17 23:38:19.006924 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:38:19.007277 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:38:19.020187 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:38:19.022021 auditctl[2191]: No rules
Apr 17 23:38:19.022392 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:38:19.022600 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:38:19.029055 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:38:19.052282 augenrules[2209]: No rules
Apr 17 23:38:19.053889 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:38:19.055162 sudo[2187]: pam_unix(sudo:session): session closed for user root
Apr 17 23:38:19.071775 sshd[2184]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:19.074542 systemd[1]: sshd@5-10.0.0.22:22-20.229.252.112:45290.service: Deactivated successfully.
Apr 17 23:38:19.076534 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:38:19.078128 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:38:19.079109 systemd-logind[1697]: Removed session 8.
Apr 17 23:38:19.095135 systemd[1]: Started sshd@6-10.0.0.22:22-20.229.252.112:45298.service - OpenSSH per-connection server daemon (20.229.252.112:45298).
Apr 17 23:38:19.225690 sshd[2217]: Accepted publickey for core from 20.229.252.112 port 45298 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:38:19.226711 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:38:19.230493 systemd-logind[1697]: New session 9 of user core.
Apr 17 23:38:19.237077 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:38:19.319814 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:38:19.320173 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:38:19.800443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:38:19.806877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:38:20.001578 chronyd[1705]: Selected source PHC0
Apr 17 23:38:23.578976 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:38:23.579046 (dockerd)[2239]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:38:26.950216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:38:26.959010 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:38:26.998431 kubelet[2245]: E0417 23:38:26.998373 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:38:27.001033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:38:27.001255 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:38:28.231601 dockerd[2239]: time="2026-04-17T23:38:28.231541441Z" level=info msg="Starting up"
Apr 17 23:38:29.587063 dockerd[2239]: time="2026-04-17T23:38:29.587010203Z" level=info msg="Loading containers: start."
Apr 17 23:38:29.788811 kernel: Initializing XFRM netlink socket
Apr 17 23:38:30.133867 systemd-networkd[1356]: docker0: Link UP
Apr 17 23:38:30.163931 dockerd[2239]: time="2026-04-17T23:38:30.163886457Z" level=info msg="Loading containers: done."
Apr 17 23:38:30.297321 dockerd[2239]: time="2026-04-17T23:38:30.297228012Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:38:30.297584 dockerd[2239]: time="2026-04-17T23:38:30.297393813Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:38:30.297584 dockerd[2239]: time="2026-04-17T23:38:30.297536015Z" level=info msg="Daemon has completed initialization"
Apr 17 23:38:30.361453 dockerd[2239]: time="2026-04-17T23:38:30.360934211Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:38:30.361056 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:38:30.921964 containerd[1726]: time="2026-04-17T23:38:30.921908689Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 17 23:38:31.909268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729373610.mount: Deactivated successfully.
Apr 17 23:38:33.666215 containerd[1726]: time="2026-04-17T23:38:33.666152208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:33.671481 containerd[1726]: time="2026-04-17T23:38:33.671427558Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193997"
Apr 17 23:38:33.675945 containerd[1726]: time="2026-04-17T23:38:33.675870700Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:33.683680 containerd[1726]: time="2026-04-17T23:38:33.681922457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:33.684430 containerd[1726]: time="2026-04-17T23:38:33.684396880Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.762439491s"
Apr 17 23:38:33.684505 containerd[1726]: time="2026-04-17T23:38:33.684441380Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 17 23:38:33.686280 containerd[1726]: time="2026-04-17T23:38:33.686237097Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 17 23:38:35.699629 containerd[1726]: time="2026-04-17T23:38:35.699571340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:35.703051 containerd[1726]: time="2026-04-17T23:38:35.702989072Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171455"
Apr 17 23:38:35.706469 containerd[1726]: time="2026-04-17T23:38:35.706416104Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:35.711259 containerd[1726]: time="2026-04-17T23:38:35.711203949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:35.712470 containerd[1726]: time="2026-04-17T23:38:35.712431561Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 2.026049862s"
Apr 17 23:38:35.712948 containerd[1726]: time="2026-04-17T23:38:35.712914465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 17 23:38:35.713562 containerd[1726]: time="2026-04-17T23:38:35.713533171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 17 23:38:37.050324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 17 23:38:37.055892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:38:37.198169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:38:37.209957 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:38:37.861822 kubelet[2455]: E0417 23:38:37.861767 2455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:38:37.865880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:38:37.866095 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:38:37.990673 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Apr 17 23:38:38.361663 containerd[1726]: time="2026-04-17T23:38:38.361594250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:38.364502 containerd[1726]: time="2026-04-17T23:38:38.364433886Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289764"
Apr 17 23:38:38.370459 containerd[1726]: time="2026-04-17T23:38:38.370410262Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:38.375781 containerd[1726]: time="2026-04-17T23:38:38.375726229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:38.376937 containerd[1726]: time="2026-04-17T23:38:38.376788743Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 2.663220172s"
Apr 17 23:38:38.376937 containerd[1726]: time="2026-04-17T23:38:38.376828043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 17 23:38:38.377750 containerd[1726]: time="2026-04-17T23:38:38.377727655Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 17 23:38:39.560530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396493908.mount: Deactivated successfully.
Apr 17 23:38:40.120994 containerd[1726]: time="2026-04-17T23:38:40.120931273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:40.131211 containerd[1726]: time="2026-04-17T23:38:40.131128702Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010719"
Apr 17 23:38:40.136223 containerd[1726]: time="2026-04-17T23:38:40.136158366Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:40.141806 containerd[1726]: time="2026-04-17T23:38:40.141743837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:40.142935 containerd[1726]: time="2026-04-17T23:38:40.142425845Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.76465389s"
Apr 17 23:38:40.142935 containerd[1726]: time="2026-04-17T23:38:40.142470046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 17 23:38:40.143170 containerd[1726]: time="2026-04-17T23:38:40.143138855Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 17 23:38:40.892885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392698371.mount: Deactivated successfully.
Apr 17 23:38:41.476229 update_engine[1700]: I20260417 23:38:41.476142 1700 update_attempter.cc:509] Updating boot flags...
Apr 17 23:38:41.649444 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2526)
Apr 17 23:38:41.766730 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2515)
Apr 17 23:38:42.358464 containerd[1726]: time="2026-04-17T23:38:42.358392962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:42.361743 containerd[1726]: time="2026-04-17T23:38:42.361676404Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Apr 17 23:38:42.365427 containerd[1726]: time="2026-04-17T23:38:42.365367451Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:42.371987 containerd[1726]: time="2026-04-17T23:38:42.371946434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:42.377689 containerd[1726]: time="2026-04-17T23:38:42.375463379Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.232281023s"
Apr 17 23:38:42.377689 containerd[1726]: time="2026-04-17T23:38:42.375513579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 17 23:38:42.378287 containerd[1726]: time="2026-04-17T23:38:42.378074812Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 17 23:38:42.917743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141784384.mount: Deactivated successfully.
Apr 17 23:38:42.937085 containerd[1726]: time="2026-04-17T23:38:42.937037060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:42.939699 containerd[1726]: time="2026-04-17T23:38:42.939641186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Apr 17 23:38:42.943626 containerd[1726]: time="2026-04-17T23:38:42.943577326Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:42.951297 containerd[1726]: time="2026-04-17T23:38:42.951248203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:42.952449 containerd[1726]: time="2026-04-17T23:38:42.951963810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.851298ms" Apr 17 23:38:42.952449 containerd[1726]: time="2026-04-17T23:38:42.952001710Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 23:38:42.952812 containerd[1726]: time="2026-04-17T23:38:42.952775518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:38:43.611626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030502977.mount: Deactivated successfully. 
Apr 17 23:38:45.254536 containerd[1726]: time="2026-04-17T23:38:45.254474490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:45.257856 containerd[1726]: time="2026-04-17T23:38:45.257800119Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719434" Apr 17 23:38:45.261942 containerd[1726]: time="2026-04-17T23:38:45.261884955Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:45.271425 containerd[1726]: time="2026-04-17T23:38:45.271362639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:45.272516 containerd[1726]: time="2026-04-17T23:38:45.272478848Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.319585029s" Apr 17 23:38:45.272753 containerd[1726]: time="2026-04-17T23:38:45.272623150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:38:47.513847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:47.519932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:47.560701 systemd[1]: Reloading requested from client PID 2688 ('systemctl') (unit session-9.scope)... Apr 17 23:38:47.560906 systemd[1]: Reloading... 
Apr 17 23:38:47.687688 zram_generator::config[2728]: No configuration found. Apr 17 23:38:47.811768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:38:47.892913 systemd[1]: Reloading finished in 331 ms. Apr 17 23:38:47.945452 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:47.949604 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:38:47.949865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:47.955145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:38:48.313856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:48.329973 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:38:48.364673 kubelet[2800]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:38:48.364673 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:38:48.364673 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:38:48.365135 kubelet[2800]: I0417 23:38:48.364772 2800 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:38:49.130622 kubelet[2800]: I0417 23:38:49.130568 2800 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:38:49.130622 kubelet[2800]: I0417 23:38:49.130606 2800 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:38:49.131439 kubelet[2800]: I0417 23:38:49.130924 2800 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:38:49.211517 kubelet[2800]: E0417 23:38:49.211465 2800 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:38:49.217090 kubelet[2800]: I0417 23:38:49.216448 2800 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:38:49.226264 kubelet[2800]: E0417 23:38:49.226218 2800 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:38:49.226264 kubelet[2800]: I0417 23:38:49.226258 2800 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:38:49.229766 kubelet[2800]: I0417 23:38:49.229732 2800 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:38:49.230633 kubelet[2800]: I0417 23:38:49.230596 2800 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:38:49.230830 kubelet[2800]: I0417 23:38:49.230632 2800 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-b8c45c9493","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:38:49.230983 kubelet[2800]: I0417 23:38:49.230838 2800 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:38:49.230983 kubelet[2800]: I0417 23:38:49.230853 2800 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:38:49.231061 kubelet[2800]: I0417 23:38:49.231006 2800 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:38:49.235137 kubelet[2800]: I0417 23:38:49.235114 2800 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:38:49.235137 kubelet[2800]: I0417 23:38:49.235141 2800 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:38:49.235272 kubelet[2800]: I0417 23:38:49.235173 2800 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:38:49.235272 kubelet[2800]: I0417 23:38:49.235198 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:38:49.241468 kubelet[2800]: E0417 23:38:49.241271 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:38:49.241468 kubelet[2800]: E0417 23:38:49.241372 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-b8c45c9493&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:38:49.242676 kubelet[2800]: I0417 23:38:49.241954 2800 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:38:49.242676 kubelet[2800]: I0417 23:38:49.242577 2800 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:38:49.244320 
kubelet[2800]: W0417 23:38:49.243342 2800 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:38:49.248071 kubelet[2800]: I0417 23:38:49.248049 2800 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:38:49.248146 kubelet[2800]: I0417 23:38:49.248103 2800 server.go:1289] "Started kubelet" Apr 17 23:38:49.252450 kubelet[2800]: I0417 23:38:49.252424 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:38:49.253804 kubelet[2800]: E0417 23:38:49.251445 2800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-b8c45c9493.18a74942cb477511 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-b8c45c9493,UID:ci-4081.3.6-n-b8c45c9493,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-b8c45c9493,},FirstTimestamp:2026-04-17 23:38:49.248077073 +0000 UTC m=+0.914629136,LastTimestamp:2026-04-17 23:38:49.248077073 +0000 UTC m=+0.914629136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-b8c45c9493,}" Apr 17 23:38:49.255587 kubelet[2800]: E0417 23:38:49.255563 2800 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:38:49.257250 kubelet[2800]: I0417 23:38:49.257215 2800 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:38:49.258348 kubelet[2800]: I0417 23:38:49.258324 2800 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:38:49.260410 kubelet[2800]: I0417 23:38:49.260353 2800 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:38:49.262874 kubelet[2800]: I0417 23:38:49.262273 2800 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:38:49.262874 kubelet[2800]: I0417 23:38:49.262521 2800 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:38:49.262874 kubelet[2800]: I0417 23:38:49.262802 2800 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:38:49.263873 kubelet[2800]: E0417 23:38:49.263248 2800 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" Apr 17 23:38:49.263873 kubelet[2800]: I0417 23:38:49.263304 2800 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:38:49.263873 kubelet[2800]: I0417 23:38:49.263514 2800 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:38:49.263873 kubelet[2800]: I0417 23:38:49.263562 2800 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:38:49.264304 kubelet[2800]: I0417 23:38:49.264283 2800 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:38:49.264405 kubelet[2800]: I0417 23:38:49.264387 2800 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such 
file or directory Apr 17 23:38:49.265037 kubelet[2800]: E0417 23:38:49.264924 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:38:49.266085 kubelet[2800]: E0417 23:38:49.266049 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b8c45c9493?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Apr 17 23:38:49.268690 kubelet[2800]: I0417 23:38:49.266786 2800 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:38:49.292266 kubelet[2800]: I0417 23:38:49.291933 2800 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:38:49.292266 kubelet[2800]: I0417 23:38:49.291952 2800 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:38:49.292266 kubelet[2800]: I0417 23:38:49.291970 2800 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:38:49.298452 kubelet[2800]: I0417 23:38:49.298409 2800 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:38:49.298452 kubelet[2800]: I0417 23:38:49.298438 2800 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:38:49.298579 kubelet[2800]: I0417 23:38:49.298460 2800 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:38:49.298579 kubelet[2800]: I0417 23:38:49.298471 2800 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:38:49.298579 kubelet[2800]: E0417 23:38:49.298512 2800 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:38:49.299093 kubelet[2800]: I0417 23:38:49.298796 2800 policy_none.go:49] "None policy: Start" Apr 17 23:38:49.299093 kubelet[2800]: I0417 23:38:49.298817 2800 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:38:49.299093 kubelet[2800]: I0417 23:38:49.298831 2800 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:38:49.302299 kubelet[2800]: E0417 23:38:49.302263 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:38:49.307286 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:38:49.320488 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:38:49.323486 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 17 23:38:49.336440 kubelet[2800]: E0417 23:38:49.336405 2800 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:38:49.336981 kubelet[2800]: I0417 23:38:49.336642 2800 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:38:49.336981 kubelet[2800]: I0417 23:38:49.336673 2800 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:38:49.337124 kubelet[2800]: I0417 23:38:49.337032 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:38:49.338282 kubelet[2800]: E0417 23:38:49.338241 2800 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:38:49.338370 kubelet[2800]: E0417 23:38:49.338308 2800 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-b8c45c9493\" not found" Apr 17 23:38:49.429165 systemd[1]: Created slice kubepods-burstable-pod6713f74435384d8413673f35dda561df.slice - libcontainer container kubepods-burstable-pod6713f74435384d8413673f35dda561df.slice. 
Apr 17 23:38:49.437334 kubelet[2800]: E0417 23:38:49.437305 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.439789 kubelet[2800]: I0417 23:38:49.439507 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.440009 kubelet[2800]: E0417 23:38:49.439982 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.444580 systemd[1]: Created slice kubepods-burstable-pod94039209da6718789d281263cda072c0.slice - libcontainer container kubepods-burstable-pod94039209da6718789d281263cda072c0.slice. Apr 17 23:38:49.446872 kubelet[2800]: E0417 23:38:49.446851 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.448838 systemd[1]: Created slice kubepods-burstable-pod6932912843a610b4b4bcc8cdb17b96b0.slice - libcontainer container kubepods-burstable-pod6932912843a610b4b4bcc8cdb17b96b0.slice. 
Apr 17 23:38:49.450524 kubelet[2800]: E0417 23:38:49.450497 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.464792 kubelet[2800]: I0417 23:38:49.464740 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94039209da6718789d281263cda072c0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-b8c45c9493\" (UID: \"94039209da6718789d281263cda072c0\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.464792 kubelet[2800]: I0417 23:38:49.464786 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6932912843a610b4b4bcc8cdb17b96b0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b8c45c9493\" (UID: \"6932912843a610b4b4bcc8cdb17b96b0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.464935 kubelet[2800]: I0417 23:38:49.464809 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.464935 kubelet[2800]: I0417 23:38:49.464834 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.464935 kubelet[2800]: I0417 23:38:49.464856 2800 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.464935 kubelet[2800]: I0417 23:38:49.464878 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.465092 kubelet[2800]: I0417 23:38:49.464935 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6932912843a610b4b4bcc8cdb17b96b0-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b8c45c9493\" (UID: \"6932912843a610b4b4bcc8cdb17b96b0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.465092 kubelet[2800]: I0417 23:38:49.464961 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6932912843a610b4b4bcc8cdb17b96b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-b8c45c9493\" (UID: \"6932912843a610b4b4bcc8cdb17b96b0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.465092 kubelet[2800]: I0417 23:38:49.464985 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.467163 kubelet[2800]: E0417 23:38:49.467125 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b8c45c9493?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Apr 17 23:38:49.642509 kubelet[2800]: I0417 23:38:49.642474 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.642859 kubelet[2800]: E0417 23:38:49.642826 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:49.739217 containerd[1726]: time="2026-04-17T23:38:49.739092787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-b8c45c9493,Uid:6713f74435384d8413673f35dda561df,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:49.748697 containerd[1726]: time="2026-04-17T23:38:49.748362368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-b8c45c9493,Uid:94039209da6718789d281263cda072c0,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:49.752415 containerd[1726]: time="2026-04-17T23:38:49.751976600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-b8c45c9493,Uid:6932912843a610b4b4bcc8cdb17b96b0,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:49.867668 kubelet[2800]: E0417 23:38:49.867618 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b8c45c9493?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection 
refused" interval="800ms" Apr 17 23:38:50.044877 kubelet[2800]: I0417 23:38:50.044777 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:50.045322 kubelet[2800]: E0417 23:38:50.045276 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:50.283758 kubelet[2800]: E0417 23:38:50.283714 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:38:50.444059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217743888.mount: Deactivated successfully. Apr 17 23:38:50.482727 containerd[1726]: time="2026-04-17T23:38:50.482674619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:50.485639 containerd[1726]: time="2026-04-17T23:38:50.485587745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 17 23:38:50.488877 containerd[1726]: time="2026-04-17T23:38:50.488840073Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:50.492147 containerd[1726]: time="2026-04-17T23:38:50.492111502Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:50.494872 containerd[1726]: 
time="2026-04-17T23:38:50.494831526Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:38:50.497886 containerd[1726]: time="2026-04-17T23:38:50.497851052Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:50.500275 containerd[1726]: time="2026-04-17T23:38:50.500005071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:38:50.504246 containerd[1726]: time="2026-04-17T23:38:50.504214908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:38:50.505002 containerd[1726]: time="2026-04-17T23:38:50.504967615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 752.924415ms" Apr 17 23:38:50.506102 containerd[1726]: time="2026-04-17T23:38:50.506066124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 766.889237ms" Apr 17 23:38:50.514841 containerd[1726]: time="2026-04-17T23:38:50.514802101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 766.06243ms" Apr 17 23:38:50.603131 kubelet[2800]: E0417 23:38:50.603089 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-b8c45c9493&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:38:50.668868 kubelet[2800]: E0417 23:38:50.668819 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b8c45c9493?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s" Apr 17 23:38:50.715000 kubelet[2800]: E0417 23:38:50.714883 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:38:50.848004 kubelet[2800]: I0417 23:38:50.847970 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:50.848357 kubelet[2800]: E0417 23:38:50.848325 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:50.874424 kubelet[2800]: E0417 23:38:50.874383 2800 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:38:51.240781 kubelet[2800]: E0417 23:38:51.240737 2800 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:38:51.310329 containerd[1726]: time="2026-04-17T23:38:51.309954886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:51.310329 containerd[1726]: time="2026-04-17T23:38:51.310030987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:51.310329 containerd[1726]: time="2026-04-17T23:38:51.310063087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:51.310329 containerd[1726]: time="2026-04-17T23:38:51.310180588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.313605219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.313728220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.313994122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.314099823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.314159023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.314213424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.314234624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:51.314392 containerd[1726]: time="2026-04-17T23:38:51.314336525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:51.348853 systemd[1]: Started cri-containerd-757256cd11edf0ed3e60b8b514cffeb7a8ad3ad81166e99014d88a9636c2e7ee.scope - libcontainer container 757256cd11edf0ed3e60b8b514cffeb7a8ad3ad81166e99014d88a9636c2e7ee. Apr 17 23:38:51.350760 systemd[1]: Started cri-containerd-b4f0e990155774e543a1fa8757f4eff6f617c94a03443382cf3b7d13b785d412.scope - libcontainer container b4f0e990155774e543a1fa8757f4eff6f617c94a03443382cf3b7d13b785d412. Apr 17 23:38:51.356088 systemd[1]: Started cri-containerd-1a832f87fba44ac22f5f17d1e6dcfad7162887b8346d2c700fb54c4542981923.scope - libcontainer container 1a832f87fba44ac22f5f17d1e6dcfad7162887b8346d2c700fb54c4542981923. 
Apr 17 23:38:51.418551 containerd[1726]: time="2026-04-17T23:38:51.418422039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-b8c45c9493,Uid:6713f74435384d8413673f35dda561df,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a832f87fba44ac22f5f17d1e6dcfad7162887b8346d2c700fb54c4542981923\"" Apr 17 23:38:51.443846 containerd[1726]: time="2026-04-17T23:38:51.443083056Z" level=info msg="CreateContainer within sandbox \"1a832f87fba44ac22f5f17d1e6dcfad7162887b8346d2c700fb54c4542981923\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:38:51.446550 containerd[1726]: time="2026-04-17T23:38:51.446497286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-b8c45c9493,Uid:6932912843a610b4b4bcc8cdb17b96b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"757256cd11edf0ed3e60b8b514cffeb7a8ad3ad81166e99014d88a9636c2e7ee\"" Apr 17 23:38:51.455710 containerd[1726]: time="2026-04-17T23:38:51.455671967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-b8c45c9493,Uid:94039209da6718789d281263cda072c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4f0e990155774e543a1fa8757f4eff6f617c94a03443382cf3b7d13b785d412\"" Apr 17 23:38:51.457762 containerd[1726]: time="2026-04-17T23:38:51.457724285Z" level=info msg="CreateContainer within sandbox \"757256cd11edf0ed3e60b8b514cffeb7a8ad3ad81166e99014d88a9636c2e7ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:38:51.465320 containerd[1726]: time="2026-04-17T23:38:51.465281051Z" level=info msg="CreateContainer within sandbox \"b4f0e990155774e543a1fa8757f4eff6f617c94a03443382cf3b7d13b785d412\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:38:51.481117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009028846.mount: Deactivated successfully. 
Apr 17 23:38:51.508386 containerd[1726]: time="2026-04-17T23:38:51.508040027Z" level=info msg="CreateContainer within sandbox \"1a832f87fba44ac22f5f17d1e6dcfad7162887b8346d2c700fb54c4542981923\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"207a5fe30a07f602997a1a53d6f168d2f0646992d13ff37219b070d013cb00d9\"" Apr 17 23:38:51.510019 containerd[1726]: time="2026-04-17T23:38:51.509992944Z" level=info msg="StartContainer for \"207a5fe30a07f602997a1a53d6f168d2f0646992d13ff37219b070d013cb00d9\"" Apr 17 23:38:51.524488 containerd[1726]: time="2026-04-17T23:38:51.524436171Z" level=info msg="CreateContainer within sandbox \"757256cd11edf0ed3e60b8b514cffeb7a8ad3ad81166e99014d88a9636c2e7ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"767dfcad95c60bdbee4bf5da5523810e07953cbdadb96f7b5c734786d11f9b46\"" Apr 17 23:38:51.526130 containerd[1726]: time="2026-04-17T23:38:51.525040276Z" level=info msg="StartContainer for \"767dfcad95c60bdbee4bf5da5523810e07953cbdadb96f7b5c734786d11f9b46\"" Apr 17 23:38:51.538349 containerd[1726]: time="2026-04-17T23:38:51.538316593Z" level=info msg="CreateContainer within sandbox \"b4f0e990155774e543a1fa8757f4eff6f617c94a03443382cf3b7d13b785d412\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3055abc3de28fd2ea9b8a0e9e207676ef2ebc138b898b865b473951c60a12cf4\"" Apr 17 23:38:51.539487 containerd[1726]: time="2026-04-17T23:38:51.539348402Z" level=info msg="StartContainer for \"3055abc3de28fd2ea9b8a0e9e207676ef2ebc138b898b865b473951c60a12cf4\"" Apr 17 23:38:51.542403 systemd[1]: Started cri-containerd-207a5fe30a07f602997a1a53d6f168d2f0646992d13ff37219b070d013cb00d9.scope - libcontainer container 207a5fe30a07f602997a1a53d6f168d2f0646992d13ff37219b070d013cb00d9. 
Apr 17 23:38:51.573930 systemd[1]: Started cri-containerd-767dfcad95c60bdbee4bf5da5523810e07953cbdadb96f7b5c734786d11f9b46.scope - libcontainer container 767dfcad95c60bdbee4bf5da5523810e07953cbdadb96f7b5c734786d11f9b46. Apr 17 23:38:51.585955 systemd[1]: Started cri-containerd-3055abc3de28fd2ea9b8a0e9e207676ef2ebc138b898b865b473951c60a12cf4.scope - libcontainer container 3055abc3de28fd2ea9b8a0e9e207676ef2ebc138b898b865b473951c60a12cf4. Apr 17 23:38:51.632579 containerd[1726]: time="2026-04-17T23:38:51.632530020Z" level=info msg="StartContainer for \"207a5fe30a07f602997a1a53d6f168d2f0646992d13ff37219b070d013cb00d9\" returns successfully" Apr 17 23:38:51.666286 containerd[1726]: time="2026-04-17T23:38:51.666209916Z" level=info msg="StartContainer for \"767dfcad95c60bdbee4bf5da5523810e07953cbdadb96f7b5c734786d11f9b46\" returns successfully" Apr 17 23:38:51.685779 containerd[1726]: time="2026-04-17T23:38:51.685729088Z" level=info msg="StartContainer for \"3055abc3de28fd2ea9b8a0e9e207676ef2ebc138b898b865b473951c60a12cf4\" returns successfully" Apr 17 23:38:52.316105 kubelet[2800]: E0417 23:38:52.315446 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:52.322285 kubelet[2800]: E0417 23:38:52.322258 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:52.328743 kubelet[2800]: E0417 23:38:52.326517 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:52.451679 kubelet[2800]: I0417 23:38:52.450596 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.238484 kubelet[2800]: E0417 
23:38:53.238431 2800 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.242809 kubelet[2800]: I0417 23:38:53.242767 2800 apiserver.go:52] "Watching apiserver" Apr 17 23:38:53.263765 kubelet[2800]: I0417 23:38:53.263730 2800 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:38:53.328531 kubelet[2800]: E0417 23:38:53.328494 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.329222 kubelet[2800]: E0417 23:38:53.329087 2800 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.379394 kubelet[2800]: I0417 23:38:53.378462 2800 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.379394 kubelet[2800]: E0417 23:38:53.378503 2800 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-b8c45c9493\": node \"ci-4081.3.6-n-b8c45c9493\" not found" Apr 17 23:38:53.466718 kubelet[2800]: I0417 23:38:53.466521 2800 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.476145 kubelet[2800]: E0417 23:38:53.475915 2800 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-b8c45c9493\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.476145 kubelet[2800]: I0417 23:38:53.475953 2800 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 
23:38:53.478137 kubelet[2800]: E0417 23:38:53.478094 2800 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-b8c45c9493\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.478137 kubelet[2800]: I0417 23:38:53.478125 2800 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.480587 kubelet[2800]: E0417 23:38:53.480558 2800 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.482897 kubelet[2800]: I0417 23:38:53.482873 2800 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:53.484897 kubelet[2800]: E0417 23:38:53.484872 2800 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:38:55.905746 systemd[1]: Reloading requested from client PID 3079 ('systemctl') (unit session-9.scope)... Apr 17 23:38:55.905767 systemd[1]: Reloading... Apr 17 23:38:56.004481 zram_generator::config[3122]: No configuration found. Apr 17 23:38:56.126856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:38:56.223014 systemd[1]: Reloading finished in 316 ms. Apr 17 23:38:56.264358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 17 23:38:56.282140 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:38:56.282393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:38:56.288979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:39:01.601772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:39:01.611014 (kubelet)[3186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:39:01.658728 kubelet[3186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:39:01.658728 kubelet[3186]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:39:01.658728 kubelet[3186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:39:01.659213 kubelet[3186]: I0417 23:39:01.658819 3186 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:39:01.667767 kubelet[3186]: I0417 23:39:01.667737 3186 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:39:01.668082 kubelet[3186]: I0417 23:39:01.667864 3186 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:39:01.668173 kubelet[3186]: I0417 23:39:01.668163 3186 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:39:01.669269 kubelet[3186]: I0417 23:39:01.669253 3186 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:39:01.673169 kubelet[3186]: I0417 23:39:01.673146 3186 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:39:01.678531 kubelet[3186]: E0417 23:39:01.678425 3186 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:39:01.678531 kubelet[3186]: I0417 23:39:01.678491 3186 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:39:01.684042 kubelet[3186]: I0417 23:39:01.683912 3186 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:39:01.685367 kubelet[3186]: I0417 23:39:01.684566 3186 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:39:01.685367 kubelet[3186]: I0417 23:39:01.684604 3186 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-b8c45c9493","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:39:01.685367 kubelet[3186]: I0417 23:39:01.684975 3186 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:39:01.685367 kubelet[3186]: I0417 23:39:01.684994 3186 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:39:01.685367 kubelet[3186]: I0417 23:39:01.685053 3186 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:39:01.685767 kubelet[3186]: I0417 23:39:01.685256 3186 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:39:01.685767 kubelet[3186]: I0417 23:39:01.685275 3186 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:39:01.685767 kubelet[3186]: I0417 23:39:01.685306 3186 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:39:01.685767 kubelet[3186]: I0417 23:39:01.685327 3186 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:39:01.698743 kubelet[3186]: I0417 23:39:01.698714 3186 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:39:01.700577 kubelet[3186]: I0417 23:39:01.700445 3186 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:39:01.707275 kubelet[3186]: I0417 23:39:01.707205 3186 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:39:01.707685 kubelet[3186]: I0417 23:39:01.707451 3186 server.go:1289] "Started kubelet" Apr 17 23:39:01.707828 kubelet[3186]: I0417 23:39:01.707801 3186 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:39:01.708885 kubelet[3186]: I0417 23:39:01.708605 3186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:39:01.709124 kubelet[3186]: I0417 23:39:01.709108 3186 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:39:01.709913 kubelet[3186]: I0417 23:39:01.709895 3186 server.go:317] "Adding debug handlers to kubelet server" Apr 17 
23:39:01.713809 kubelet[3186]: I0417 23:39:01.713550 3186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:39:01.716649 kubelet[3186]: I0417 23:39:01.716632 3186 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:39:01.728634 kubelet[3186]: I0417 23:39:01.728599 3186 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:39:01.729782 kubelet[3186]: E0417 23:39:01.728904 3186 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b8c45c9493\" not found" Apr 17 23:39:01.729782 kubelet[3186]: I0417 23:39:01.729116 3186 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:39:01.729782 kubelet[3186]: I0417 23:39:01.729249 3186 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:39:01.743872 kubelet[3186]: E0417 23:39:01.743836 3186 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:39:01.747826 kubelet[3186]: I0417 23:39:01.747799 3186 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:39:01.748359 kubelet[3186]: I0417 23:39:01.748331 3186 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:39:01.750581 kubelet[3186]: I0417 23:39:01.750430 3186 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.752823 3186 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.752843 3186 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.752865 3186 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.752876 3186 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:39:02.337887 kubelet[3186]: E0417 23:39:01.752920 3186 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.756943 3186 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.832749 3186 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.832764 3186 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:39:02.337887 kubelet[3186]: I0417 23:39:01.832782 3186 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:39:02.337887 kubelet[3186]: E0417 23:39:01.853855 3186 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 23:39:02.337887 kubelet[3186]: E0417 23:39:02.054173 3186 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 17 23:39:02.338717 kubelet[3186]: I0417 23:39:02.338380 3186 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:39:02.338717 kubelet[3186]: I0417 23:39:02.338406 3186 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:39:02.338717 kubelet[3186]: I0417 23:39:02.338435 3186 policy_none.go:49] "None policy: Start" Apr 17 23:39:02.338717 kubelet[3186]: I0417 23:39:02.338452 3186 memory_manager.go:186] "Starting 
memorymanager" policy="None" Apr 17 23:39:02.338717 kubelet[3186]: I0417 23:39:02.338469 3186 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:39:02.338717 kubelet[3186]: I0417 23:39:02.338608 3186 state_mem.go:75] "Updated machine memory state" Apr 17 23:39:02.350753 kubelet[3186]: E0417 23:39:02.350451 3186 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:39:02.353304 kubelet[3186]: I0417 23:39:02.353017 3186 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:39:02.353304 kubelet[3186]: I0417 23:39:02.353089 3186 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:39:02.354603 kubelet[3186]: I0417 23:39:02.353872 3186 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:39:02.355203 kubelet[3186]: E0417 23:39:02.355162 3186 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 23:39:02.456380 kubelet[3186]: I0417 23:39:02.455355 3186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.456380 kubelet[3186]: I0417 23:39:02.455862 3186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.456380 kubelet[3186]: I0417 23:39:02.456185 3186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.465951 kubelet[3186]: I0417 23:39:02.465456 3186 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:39:02.472801 kubelet[3186]: I0417 23:39:02.472749 3186 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.474428 kubelet[3186]: I0417 23:39:02.473240 3186 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:39:02.474743 kubelet[3186]: I0417 23:39:02.473567 3186 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:39:02.486705 kubelet[3186]: I0417 23:39:02.486422 3186 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.486705 kubelet[3186]: I0417 23:39:02.486510 3186 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.486705 kubelet[3186]: I0417 23:39:02.486537 3186 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:39:02.487303 containerd[1726]: 
time="2026-04-17T23:39:02.487258573Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:39:02.488646 kubelet[3186]: I0417 23:39:02.487545 3186 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:39:02.533872 kubelet[3186]: I0417 23:39:02.533829 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6932912843a610b4b4bcc8cdb17b96b0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-b8c45c9493\" (UID: \"6932912843a610b4b4bcc8cdb17b96b0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.533872 kubelet[3186]: I0417 23:39:02.533877 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.534078 kubelet[3186]: I0417 23:39:02.533899 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.534078 kubelet[3186]: I0417 23:39:02.533924 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.534078 kubelet[3186]: I0417 23:39:02.533944 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94039209da6718789d281263cda072c0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-b8c45c9493\" (UID: \"94039209da6718789d281263cda072c0\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.534078 kubelet[3186]: I0417 23:39:02.533970 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6932912843a610b4b4bcc8cdb17b96b0-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b8c45c9493\" (UID: \"6932912843a610b4b4bcc8cdb17b96b0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.534078 kubelet[3186]: I0417 23:39:02.533992 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6932912843a610b4b4bcc8cdb17b96b0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b8c45c9493\" (UID: \"6932912843a610b4b4bcc8cdb17b96b0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.534325 kubelet[3186]: I0417 23:39:02.534011 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.534325 kubelet[3186]: I0417 23:39:02.534031 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6713f74435384d8413673f35dda561df-kubeconfig\") 
pod \"kube-controller-manager-ci-4081.3.6-n-b8c45c9493\" (UID: \"6713f74435384d8413673f35dda561df\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" Apr 17 23:39:02.694264 kubelet[3186]: I0417 23:39:02.693739 3186 apiserver.go:52] "Watching apiserver" Apr 17 23:39:02.736590 kubelet[3186]: I0417 23:39:02.735493 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39077634-d8c3-4105-9e28-9177b63e3e57-xtables-lock\") pod \"kube-proxy-bdlc8\" (UID: \"39077634-d8c3-4105-9e28-9177b63e3e57\") " pod="kube-system/kube-proxy-bdlc8" Apr 17 23:39:02.736590 kubelet[3186]: I0417 23:39:02.735541 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39077634-d8c3-4105-9e28-9177b63e3e57-lib-modules\") pod \"kube-proxy-bdlc8\" (UID: \"39077634-d8c3-4105-9e28-9177b63e3e57\") " pod="kube-system/kube-proxy-bdlc8" Apr 17 23:39:02.736590 kubelet[3186]: I0417 23:39:02.735591 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39077634-d8c3-4105-9e28-9177b63e3e57-kube-proxy\") pod \"kube-proxy-bdlc8\" (UID: \"39077634-d8c3-4105-9e28-9177b63e3e57\") " pod="kube-system/kube-proxy-bdlc8" Apr 17 23:39:02.736590 kubelet[3186]: I0417 23:39:02.735617 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfpw\" (UniqueName: \"kubernetes.io/projected/39077634-d8c3-4105-9e28-9177b63e3e57-kube-api-access-brfpw\") pod \"kube-proxy-bdlc8\" (UID: \"39077634-d8c3-4105-9e28-9177b63e3e57\") " pod="kube-system/kube-proxy-bdlc8" Apr 17 23:39:02.763329 systemd[1]: Created slice kubepods-besteffort-pod39077634_d8c3_4105_9e28_9177b63e3e57.slice - libcontainer container 
kubepods-besteffort-pod39077634_d8c3_4105_9e28_9177b63e3e57.slice. Apr 17 23:39:02.819740 kubelet[3186]: I0417 23:39:02.819423 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b8c45c9493" podStartSLOduration=0.81940332 podStartE2EDuration="819.40332ms" podCreationTimestamp="2026-04-17 23:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:02.794419283 +0000 UTC m=+1.176255543" watchObservedRunningTime="2026-04-17 23:39:02.81940332 +0000 UTC m=+1.201239580" Apr 17 23:39:02.829377 kubelet[3186]: I0417 23:39:02.829345 3186 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:39:02.882584 kubelet[3186]: I0417 23:39:02.881156 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b8c45c9493" podStartSLOduration=0.881080904 podStartE2EDuration="881.080904ms" podCreationTimestamp="2026-04-17 23:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:02.833775156 +0000 UTC m=+1.215611416" watchObservedRunningTime="2026-04-17 23:39:02.881080904 +0000 UTC m=+1.262917164" Apr 17 23:39:02.882584 kubelet[3186]: I0417 23:39:02.881318 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-b8c45c9493" podStartSLOduration=0.881306906 podStartE2EDuration="881.306906ms" podCreationTimestamp="2026-04-17 23:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:02.880612699 +0000 UTC m=+1.262449059" watchObservedRunningTime="2026-04-17 23:39:02.881306906 +0000 UTC m=+1.263143266" Apr 17 23:39:03.077414 containerd[1726]: 
time="2026-04-17T23:39:03.077358063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdlc8,Uid:39077634-d8c3-4105-9e28-9177b63e3e57,Namespace:kube-system,Attempt:0,}" Apr 17 23:39:05.882949 kubelet[3186]: E0417 23:39:05.882902 3186 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.13s" Apr 17 23:39:06.215470 systemd[1]: Created slice kubepods-besteffort-pod3ebe97ac_43af_4c9b_af14_df7a901b9092.slice - libcontainer container kubepods-besteffort-pod3ebe97ac_43af_4c9b_af14_df7a901b9092.slice. Apr 17 23:39:06.223620 containerd[1726]: time="2026-04-17T23:39:06.222709158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:06.223620 containerd[1726]: time="2026-04-17T23:39:06.222779759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:06.223620 containerd[1726]: time="2026-04-17T23:39:06.222801459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:06.223620 containerd[1726]: time="2026-04-17T23:39:06.222893560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:06.257944 systemd[1]: Started cri-containerd-d96a346f31f3788a711f187e0105f9d8ecc04b1997175e761fcdfd20e9d7eda6.scope - libcontainer container d96a346f31f3788a711f187e0105f9d8ecc04b1997175e761fcdfd20e9d7eda6. 
Apr 17 23:39:06.258345 kubelet[3186]: I0417 23:39:06.258309 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ebe97ac-43af-4c9b-af14-df7a901b9092-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-66rtn\" (UID: \"3ebe97ac-43af-4c9b-af14-df7a901b9092\") " pod="tigera-operator/tigera-operator-6bf85f8dd-66rtn" Apr 17 23:39:06.258451 kubelet[3186]: I0417 23:39:06.258357 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vgs\" (UniqueName: \"kubernetes.io/projected/3ebe97ac-43af-4c9b-af14-df7a901b9092-kube-api-access-24vgs\") pod \"tigera-operator-6bf85f8dd-66rtn\" (UID: \"3ebe97ac-43af-4c9b-af14-df7a901b9092\") " pod="tigera-operator/tigera-operator-6bf85f8dd-66rtn" Apr 17 23:39:06.279986 containerd[1726]: time="2026-04-17T23:39:06.279937500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdlc8,Uid:39077634-d8c3-4105-9e28-9177b63e3e57,Namespace:kube-system,Attempt:0,} returns sandbox id \"d96a346f31f3788a711f187e0105f9d8ecc04b1997175e761fcdfd20e9d7eda6\"" Apr 17 23:39:06.299479 containerd[1726]: time="2026-04-17T23:39:06.299317184Z" level=info msg="CreateContainer within sandbox \"d96a346f31f3788a711f187e0105f9d8ecc04b1997175e761fcdfd20e9d7eda6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:39:06.644827 containerd[1726]: time="2026-04-17T23:39:06.644518753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-66rtn,Uid:3ebe97ac-43af-4c9b-af14-df7a901b9092,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:39:06.782899 containerd[1726]: time="2026-04-17T23:39:06.782848264Z" level=info msg="CreateContainer within sandbox \"d96a346f31f3788a711f187e0105f9d8ecc04b1997175e761fcdfd20e9d7eda6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"46cf56ca21c2f144d65e6eaa5d5f7d0fae9797355cbb971766aae4d1913bf71a\"" Apr 17 23:39:06.785602 containerd[1726]: time="2026-04-17T23:39:06.783814273Z" level=info msg="StartContainer for \"46cf56ca21c2f144d65e6eaa5d5f7d0fae9797355cbb971766aae4d1913bf71a\"" Apr 17 23:39:06.810834 systemd[1]: Started cri-containerd-46cf56ca21c2f144d65e6eaa5d5f7d0fae9797355cbb971766aae4d1913bf71a.scope - libcontainer container 46cf56ca21c2f144d65e6eaa5d5f7d0fae9797355cbb971766aae4d1913bf71a. Apr 17 23:39:06.941784 containerd[1726]: time="2026-04-17T23:39:06.941645268Z" level=info msg="StartContainer for \"46cf56ca21c2f144d65e6eaa5d5f7d0fae9797355cbb971766aae4d1913bf71a\" returns successfully" Apr 17 23:39:07.055080 containerd[1726]: time="2026-04-17T23:39:07.054918441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:07.055080 containerd[1726]: time="2026-04-17T23:39:07.054980642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:07.055080 containerd[1726]: time="2026-04-17T23:39:07.055011442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:07.055915 containerd[1726]: time="2026-04-17T23:39:07.055776849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:07.072866 systemd[1]: Started cri-containerd-a116b92fc7e73fb8f7f01aca15be590415bbef50fc3ce29ae1546953c47d25f8.scope - libcontainer container a116b92fc7e73fb8f7f01aca15be590415bbef50fc3ce29ae1546953c47d25f8. 
Apr 17 23:39:07.129921 containerd[1726]: time="2026-04-17T23:39:07.129109744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-66rtn,Uid:3ebe97ac-43af-4c9b-af14-df7a901b9092,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a116b92fc7e73fb8f7f01aca15be590415bbef50fc3ce29ae1546953c47d25f8\"" Apr 17 23:39:07.134311 containerd[1726]: time="2026-04-17T23:39:07.133360484Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:39:07.847888 kubelet[3186]: I0417 23:39:07.847633 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bdlc8" podStartSLOduration=5.84761635 podStartE2EDuration="5.84761635s" podCreationTimestamp="2026-04-17 23:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:39:07.847403348 +0000 UTC m=+6.229239708" watchObservedRunningTime="2026-04-17 23:39:07.84761635 +0000 UTC m=+6.229452610" Apr 17 23:39:14.641042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427658040.mount: Deactivated successfully. 
Apr 17 23:39:15.655594 containerd[1726]: time="2026-04-17T23:39:15.655537184Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:15.658080 containerd[1726]: time="2026-04-17T23:39:15.658014005Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:39:15.661803 containerd[1726]: time="2026-04-17T23:39:15.661741036Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:15.666335 containerd[1726]: time="2026-04-17T23:39:15.666298275Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:15.667164 containerd[1726]: time="2026-04-17T23:39:15.666983380Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 8.533579596s" Apr 17 23:39:15.667164 containerd[1726]: time="2026-04-17T23:39:15.667026481Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:39:15.674343 containerd[1726]: time="2026-04-17T23:39:15.674300442Z" level=info msg="CreateContainer within sandbox \"a116b92fc7e73fb8f7f01aca15be590415bbef50fc3ce29ae1546953c47d25f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:39:15.710510 containerd[1726]: time="2026-04-17T23:39:15.710465948Z" level=info msg="CreateContainer within sandbox 
\"a116b92fc7e73fb8f7f01aca15be590415bbef50fc3ce29ae1546953c47d25f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c48f85e0b28682a79817d23ae72a74537dfa2b7ce8a2f6b7f03128a9641c7081\"" Apr 17 23:39:15.711058 containerd[1726]: time="2026-04-17T23:39:15.711001153Z" level=info msg="StartContainer for \"c48f85e0b28682a79817d23ae72a74537dfa2b7ce8a2f6b7f03128a9641c7081\"" Apr 17 23:39:15.744825 systemd[1]: Started cri-containerd-c48f85e0b28682a79817d23ae72a74537dfa2b7ce8a2f6b7f03128a9641c7081.scope - libcontainer container c48f85e0b28682a79817d23ae72a74537dfa2b7ce8a2f6b7f03128a9641c7081. Apr 17 23:39:15.774443 containerd[1726]: time="2026-04-17T23:39:15.774380288Z" level=info msg="StartContainer for \"c48f85e0b28682a79817d23ae72a74537dfa2b7ce8a2f6b7f03128a9641c7081\" returns successfully" Apr 17 23:39:22.069045 sudo[2220]: pam_unix(sudo:session): session closed for user root Apr 17 23:39:22.088843 sshd[2217]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:22.097076 systemd[1]: sshd@6-10.0.0.22:22-20.229.252.112:45298.service: Deactivated successfully. Apr 17 23:39:22.098203 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:39:22.100395 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:39:22.102054 systemd[1]: session-9.scope: Consumed 4.463s CPU time, 156.3M memory peak, 0B memory swap peak. Apr 17 23:39:22.103489 systemd-logind[1697]: Removed session 9. 
Apr 17 23:39:25.477103 kubelet[3186]: I0417 23:39:25.477025 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-66rtn" podStartSLOduration=10.941129285 podStartE2EDuration="19.4768985s" podCreationTimestamp="2026-04-17 23:39:06 +0000 UTC" firstStartedPulling="2026-04-17 23:39:07.132400975 +0000 UTC m=+5.514237235" lastFinishedPulling="2026-04-17 23:39:15.66817019 +0000 UTC m=+14.050006450" observedRunningTime="2026-04-17 23:39:15.866243865 +0000 UTC m=+14.248080225" watchObservedRunningTime="2026-04-17 23:39:25.4768985 +0000 UTC m=+23.858734860" Apr 17 23:39:25.498255 systemd[1]: Created slice kubepods-besteffort-podf2c06e2c_b96d_4d45_8b09_d8a9cbb78e3a.slice - libcontainer container kubepods-besteffort-podf2c06e2c_b96d_4d45_8b09_d8a9cbb78e3a.slice. Apr 17 23:39:25.591430 kubelet[3186]: I0417 23:39:25.591388 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a-tigera-ca-bundle\") pod \"calico-typha-f56bb874f-89gft\" (UID: \"f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a\") " pod="calico-system/calico-typha-f56bb874f-89gft" Apr 17 23:39:25.591594 kubelet[3186]: I0417 23:39:25.591482 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hxgp\" (UniqueName: \"kubernetes.io/projected/f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a-kube-api-access-4hxgp\") pod \"calico-typha-f56bb874f-89gft\" (UID: \"f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a\") " pod="calico-system/calico-typha-f56bb874f-89gft" Apr 17 23:39:25.591594 kubelet[3186]: I0417 23:39:25.591534 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a-typha-certs\") pod \"calico-typha-f56bb874f-89gft\" (UID: 
\"f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a\") " pod="calico-system/calico-typha-f56bb874f-89gft" Apr 17 23:39:25.709354 systemd[1]: Created slice kubepods-besteffort-pod2caf62e8_b6aa_4d6d_b79a_5ffd99037c85.slice - libcontainer container kubepods-besteffort-pod2caf62e8_b6aa_4d6d_b79a_5ffd99037c85.slice. Apr 17 23:39:25.795190 kubelet[3186]: I0417 23:39:25.793858 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-nodeproc\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795190 kubelet[3186]: I0417 23:39:25.793920 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-policysync\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795190 kubelet[3186]: I0417 23:39:25.793950 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-sys-fs\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795190 kubelet[3186]: I0417 23:39:25.793988 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-xtables-lock\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795190 kubelet[3186]: I0417 23:39:25.794014 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-lib-modules\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795190 kubelet[3186]: I0417 23:39:25.794038 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-bpffs\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795867 kubelet[3186]: I0417 23:39:25.794075 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-cni-bin-dir\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795867 kubelet[3186]: I0417 23:39:25.794102 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-flexvol-driver-host\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795867 kubelet[3186]: I0417 23:39:25.794153 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-var-lib-calico\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795867 kubelet[3186]: I0417 23:39:25.794182 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-cni-log-dir\") pod 
\"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.795867 kubelet[3186]: I0417 23:39:25.794643 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28dwx\" (UniqueName: \"kubernetes.io/projected/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-kube-api-access-28dwx\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.796081 kubelet[3186]: I0417 23:39:25.794712 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-var-run-calico\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.796081 kubelet[3186]: I0417 23:39:25.794761 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-cni-net-dir\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.796081 kubelet[3186]: I0417 23:39:25.794792 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-node-certs\") pod \"calico-node-q4dh8\" (UID: \"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.796081 kubelet[3186]: I0417 23:39:25.794834 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2caf62e8-b6aa-4d6d-b79a-5ffd99037c85-tigera-ca-bundle\") pod \"calico-node-q4dh8\" (UID: 
\"2caf62e8-b6aa-4d6d-b79a-5ffd99037c85\") " pod="calico-system/calico-node-q4dh8" Apr 17 23:39:25.803714 kubelet[3186]: E0417 23:39:25.801962 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee" Apr 17 23:39:25.804490 containerd[1726]: time="2026-04-17T23:39:25.804084507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f56bb874f-89gft,Uid:f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:25.859517 containerd[1726]: time="2026-04-17T23:39:25.859173532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:25.859517 containerd[1726]: time="2026-04-17T23:39:25.859377935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:25.859517 containerd[1726]: time="2026-04-17T23:39:25.859436136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:25.862159 containerd[1726]: time="2026-04-17T23:39:25.862083871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:25.896660 kubelet[3186]: I0417 23:39:25.895557 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a52a9606-2487-4d0a-8d3d-112a3887d0ee-socket-dir\") pod \"csi-node-driver-9fs9x\" (UID: \"a52a9606-2487-4d0a-8d3d-112a3887d0ee\") " pod="calico-system/csi-node-driver-9fs9x" Apr 17 23:39:25.896660 kubelet[3186]: I0417 23:39:25.895603 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a52a9606-2487-4d0a-8d3d-112a3887d0ee-varrun\") pod \"csi-node-driver-9fs9x\" (UID: \"a52a9606-2487-4d0a-8d3d-112a3887d0ee\") " pod="calico-system/csi-node-driver-9fs9x" Apr 17 23:39:25.896660 kubelet[3186]: I0417 23:39:25.895678 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a52a9606-2487-4d0a-8d3d-112a3887d0ee-kubelet-dir\") pod \"csi-node-driver-9fs9x\" (UID: \"a52a9606-2487-4d0a-8d3d-112a3887d0ee\") " pod="calico-system/csi-node-driver-9fs9x" Apr 17 23:39:25.896660 kubelet[3186]: I0417 23:39:25.895702 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a52a9606-2487-4d0a-8d3d-112a3887d0ee-registration-dir\") pod \"csi-node-driver-9fs9x\" (UID: \"a52a9606-2487-4d0a-8d3d-112a3887d0ee\") " pod="calico-system/csi-node-driver-9fs9x" Apr 17 23:39:25.899351 kubelet[3186]: I0417 23:39:25.899042 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grt7v\" (UniqueName: \"kubernetes.io/projected/a52a9606-2487-4d0a-8d3d-112a3887d0ee-kube-api-access-grt7v\") pod \"csi-node-driver-9fs9x\" (UID: \"a52a9606-2487-4d0a-8d3d-112a3887d0ee\") " 
pod="calico-system/csi-node-driver-9fs9x" Apr 17 23:39:25.902127 systemd[1]: Started cri-containerd-960bfbb42a2a55ca696325056c6fb4a39e426679ddf029bea39af8b14202ea96.scope - libcontainer container 960bfbb42a2a55ca696325056c6fb4a39e426679ddf029bea39af8b14202ea96. Apr 17 23:39:25.903574 kubelet[3186]: E0417 23:39:25.903345 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:25.903574 kubelet[3186]: W0417 23:39:25.903379 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:25.903574 kubelet[3186]: E0417 23:39:25.903402 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:25.904754 kubelet[3186]: E0417 23:39:25.903733 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:25.904754 kubelet[3186]: W0417 23:39:25.903745 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:25.904754 kubelet[3186]: E0417 23:39:25.903785 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:25.913477 kubelet[3186]: E0417 23:39:25.913453 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:25.913743 kubelet[3186]: W0417 23:39:25.913609 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:25.913743 kubelet[3186]: E0417 23:39:25.913708 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:25.951163 containerd[1726]: time="2026-04-17T23:39:25.951127343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f56bb874f-89gft,Uid:f2c06e2c-b96d-4d45-8b09-d8a9cbb78e3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"960bfbb42a2a55ca696325056c6fb4a39e426679ddf029bea39af8b14202ea96\"" Apr 17 23:39:25.952898 containerd[1726]: time="2026-04-17T23:39:25.952846665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:39:25.999806 kubelet[3186]: E0417 23:39:25.999771 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:25.999806 kubelet[3186]: W0417 23:39:25.999795 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.000167 kubelet[3186]: E0417 23:39:25.999823 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.000167 kubelet[3186]: E0417 23:39:26.000111 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.000167 kubelet[3186]: W0417 23:39:26.000127 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.000291 kubelet[3186]: E0417 23:39:26.000177 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.000499 kubelet[3186]: E0417 23:39:26.000474 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.000499 kubelet[3186]: W0417 23:39:26.000494 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.000672 kubelet[3186]: E0417 23:39:26.000511 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.000802 kubelet[3186]: E0417 23:39:26.000779 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.000802 kubelet[3186]: W0417 23:39:26.000797 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.000802 kubelet[3186]: E0417 23:39:26.000812 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.001132 kubelet[3186]: E0417 23:39:26.001112 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.001132 kubelet[3186]: W0417 23:39:26.001129 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.001262 kubelet[3186]: E0417 23:39:26.001143 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.001405 kubelet[3186]: E0417 23:39:26.001372 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.001405 kubelet[3186]: W0417 23:39:26.001387 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.001405 kubelet[3186]: E0417 23:39:26.001400 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.001681 kubelet[3186]: E0417 23:39:26.001628 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.001681 kubelet[3186]: W0417 23:39:26.001639 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.001681 kubelet[3186]: E0417 23:39:26.001672 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.001971 kubelet[3186]: E0417 23:39:26.001908 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.001971 kubelet[3186]: W0417 23:39:26.001971 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.002083 kubelet[3186]: E0417 23:39:26.001984 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.002261 kubelet[3186]: E0417 23:39:26.002242 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.002261 kubelet[3186]: W0417 23:39:26.002259 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.002519 kubelet[3186]: E0417 23:39:26.002274 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.002519 kubelet[3186]: E0417 23:39:26.002484 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.002519 kubelet[3186]: W0417 23:39:26.002504 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.002519 kubelet[3186]: E0417 23:39:26.002516 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.002883 kubelet[3186]: E0417 23:39:26.002865 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.002883 kubelet[3186]: W0417 23:39:26.002881 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.003007 kubelet[3186]: E0417 23:39:26.002895 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.003184 kubelet[3186]: E0417 23:39:26.003167 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.003184 kubelet[3186]: W0417 23:39:26.003182 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.003371 kubelet[3186]: E0417 23:39:26.003196 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.003508 kubelet[3186]: E0417 23:39:26.003490 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.003508 kubelet[3186]: W0417 23:39:26.003505 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.003689 kubelet[3186]: E0417 23:39:26.003519 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.003916 kubelet[3186]: E0417 23:39:26.003785 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.003916 kubelet[3186]: W0417 23:39:26.003800 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.003916 kubelet[3186]: E0417 23:39:26.003810 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.004080 kubelet[3186]: E0417 23:39:26.004054 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.004080 kubelet[3186]: W0417 23:39:26.004065 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.004164 kubelet[3186]: E0417 23:39:26.004095 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.004350 kubelet[3186]: E0417 23:39:26.004331 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.004441 kubelet[3186]: W0417 23:39:26.004356 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.004441 kubelet[3186]: E0417 23:39:26.004369 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.004714 kubelet[3186]: E0417 23:39:26.004646 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.004714 kubelet[3186]: W0417 23:39:26.004688 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.004714 kubelet[3186]: E0417 23:39:26.004701 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.005068 kubelet[3186]: E0417 23:39:26.005051 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.005068 kubelet[3186]: W0417 23:39:26.005065 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.005228 kubelet[3186]: E0417 23:39:26.005078 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.005328 kubelet[3186]: E0417 23:39:26.005311 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.005328 kubelet[3186]: W0417 23:39:26.005326 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.005486 kubelet[3186]: E0417 23:39:26.005339 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.005747 kubelet[3186]: E0417 23:39:26.005594 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.005747 kubelet[3186]: W0417 23:39:26.005615 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.005747 kubelet[3186]: E0417 23:39:26.005625 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.005913 kubelet[3186]: E0417 23:39:26.005880 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.005913 kubelet[3186]: W0417 23:39:26.005892 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.005913 kubelet[3186]: E0417 23:39:26.005905 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.006170 kubelet[3186]: E0417 23:39:26.006153 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.006170 kubelet[3186]: W0417 23:39:26.006167 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.006393 kubelet[3186]: E0417 23:39:26.006181 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.006451 kubelet[3186]: E0417 23:39:26.006391 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.006451 kubelet[3186]: W0417 23:39:26.006403 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.006451 kubelet[3186]: E0417 23:39:26.006415 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.006695 kubelet[3186]: E0417 23:39:26.006678 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.006695 kubelet[3186]: W0417 23:39:26.006692 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.006791 kubelet[3186]: E0417 23:39:26.006705 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.007067 kubelet[3186]: E0417 23:39:26.007016 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.007067 kubelet[3186]: W0417 23:39:26.007031 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.007067 kubelet[3186]: E0417 23:39:26.007043 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:26.013478 containerd[1726]: time="2026-04-17T23:39:26.013343762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q4dh8,Uid:2caf62e8-b6aa-4d6d-b79a-5ffd99037c85,Namespace:calico-system,Attempt:0,}" Apr 17 23:39:26.017436 kubelet[3186]: E0417 23:39:26.017411 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:26.017436 kubelet[3186]: W0417 23:39:26.017431 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:26.017548 kubelet[3186]: E0417 23:39:26.017446 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:26.069347 containerd[1726]: time="2026-04-17T23:39:26.068337886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:39:26.069347 containerd[1726]: time="2026-04-17T23:39:26.069205497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:39:26.069347 containerd[1726]: time="2026-04-17T23:39:26.069230097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.069882 containerd[1726]: time="2026-04-17T23:39:26.069471001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:39:26.087861 systemd[1]: Started cri-containerd-f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0.scope - libcontainer container f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0. Apr 17 23:39:26.112522 containerd[1726]: time="2026-04-17T23:39:26.112089362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q4dh8,Uid:2caf62e8-b6aa-4d6d-b79a-5ffd99037c85,Namespace:calico-system,Attempt:0,} returns sandbox id \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\"" Apr 17 23:39:26.709671 systemd[1]: run-containerd-runc-k8s.io-960bfbb42a2a55ca696325056c6fb4a39e426679ddf029bea39af8b14202ea96-runc.dRJvSd.mount: Deactivated successfully. Apr 17 23:39:27.427434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445604591.mount: Deactivated successfully. Apr 17 23:39:27.755588 kubelet[3186]: E0417 23:39:27.754271 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee" Apr 17 23:39:28.591177 containerd[1726]: time="2026-04-17T23:39:28.591110995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:28.593931 containerd[1726]: time="2026-04-17T23:39:28.593861331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:39:28.603277 containerd[1726]: time="2026-04-17T23:39:28.603197454Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:28.608756 containerd[1726]: 
time="2026-04-17T23:39:28.608682726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:28.610022 containerd[1726]: time="2026-04-17T23:39:28.609685239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.656774573s" Apr 17 23:39:28.610022 containerd[1726]: time="2026-04-17T23:39:28.609731740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:39:28.610687 containerd[1726]: time="2026-04-17T23:39:28.610646452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:39:28.638642 containerd[1726]: time="2026-04-17T23:39:28.638600220Z" level=info msg="CreateContainer within sandbox \"960bfbb42a2a55ca696325056c6fb4a39e426679ddf029bea39af8b14202ea96\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:39:28.677666 containerd[1726]: time="2026-04-17T23:39:28.677607433Z" level=info msg="CreateContainer within sandbox \"960bfbb42a2a55ca696325056c6fb4a39e426679ddf029bea39af8b14202ea96\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe79720b8b6ed4cc44fba4239bb071c50a037c01de62df21d5820dceb1092666\"" Apr 17 23:39:28.678296 containerd[1726]: time="2026-04-17T23:39:28.678168841Z" level=info msg="StartContainer for \"fe79720b8b6ed4cc44fba4239bb071c50a037c01de62df21d5820dceb1092666\"" Apr 17 23:39:28.713822 systemd[1]: Started 
cri-containerd-fe79720b8b6ed4cc44fba4239bb071c50a037c01de62df21d5820dceb1092666.scope - libcontainer container fe79720b8b6ed4cc44fba4239bb071c50a037c01de62df21d5820dceb1092666. Apr 17 23:39:28.760749 containerd[1726]: time="2026-04-17T23:39:28.760698127Z" level=info msg="StartContainer for \"fe79720b8b6ed4cc44fba4239bb071c50a037c01de62df21d5820dceb1092666\" returns successfully" Apr 17 23:39:28.908706 kubelet[3186]: E0417 23:39:28.908037 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.908706 kubelet[3186]: W0417 23:39:28.908106 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.908706 kubelet[3186]: E0417 23:39:28.908161 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.908706 kubelet[3186]: E0417 23:39:28.908527 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.908706 kubelet[3186]: W0417 23:39:28.908542 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.911910 kubelet[3186]: E0417 23:39:28.910112 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.911910 kubelet[3186]: E0417 23:39:28.911069 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.911910 kubelet[3186]: W0417 23:39:28.911086 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.911910 kubelet[3186]: E0417 23:39:28.911101 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.914079 kubelet[3186]: E0417 23:39:28.913786 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.914079 kubelet[3186]: W0417 23:39:28.913801 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.914079 kubelet[3186]: E0417 23:39:28.913931 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.915683 kubelet[3186]: E0417 23:39:28.914628 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.915683 kubelet[3186]: W0417 23:39:28.914643 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.915683 kubelet[3186]: E0417 23:39:28.914752 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.916800 kubelet[3186]: E0417 23:39:28.916632 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.916800 kubelet[3186]: W0417 23:39:28.916647 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.916800 kubelet[3186]: E0417 23:39:28.916689 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.917427 kubelet[3186]: E0417 23:39:28.917293 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.917427 kubelet[3186]: W0417 23:39:28.917307 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.917427 kubelet[3186]: E0417 23:39:28.917320 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.917590 kubelet[3186]: E0417 23:39:28.917550 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.917590 kubelet[3186]: W0417 23:39:28.917562 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.917590 kubelet[3186]: E0417 23:39:28.917575 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.920182 kubelet[3186]: E0417 23:39:28.920151 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.920182 kubelet[3186]: W0417 23:39:28.920171 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.920328 kubelet[3186]: E0417 23:39:28.920186 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.920480 kubelet[3186]: E0417 23:39:28.920385 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.920480 kubelet[3186]: W0417 23:39:28.920400 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.920480 kubelet[3186]: E0417 23:39:28.920413 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.920891 kubelet[3186]: E0417 23:39:28.920613 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.920891 kubelet[3186]: W0417 23:39:28.920624 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.920891 kubelet[3186]: E0417 23:39:28.920638 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.920891 kubelet[3186]: E0417 23:39:28.920855 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.920891 kubelet[3186]: W0417 23:39:28.920866 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.920891 kubelet[3186]: E0417 23:39:28.920879 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.921751 kubelet[3186]: E0417 23:39:28.921725 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.921751 kubelet[3186]: W0417 23:39:28.921744 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.921872 kubelet[3186]: E0417 23:39:28.921758 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.923940 kubelet[3186]: E0417 23:39:28.923919 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.923940 kubelet[3186]: W0417 23:39:28.923938 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.924073 kubelet[3186]: E0417 23:39:28.923953 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.924180 kubelet[3186]: E0417 23:39:28.924163 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.924234 kubelet[3186]: W0417 23:39:28.924180 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.924234 kubelet[3186]: E0417 23:39:28.924193 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.926621 kubelet[3186]: E0417 23:39:28.926602 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.926621 kubelet[3186]: W0417 23:39:28.926620 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.926891 kubelet[3186]: E0417 23:39:28.926635 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.926945 kubelet[3186]: E0417 23:39:28.926904 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.926945 kubelet[3186]: W0417 23:39:28.926915 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.926945 kubelet[3186]: E0417 23:39:28.926927 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.927215 kubelet[3186]: E0417 23:39:28.927195 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.927215 kubelet[3186]: W0417 23:39:28.927215 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.927333 kubelet[3186]: E0417 23:39:28.927228 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.928777 kubelet[3186]: E0417 23:39:28.928755 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.928777 kubelet[3186]: W0417 23:39:28.928774 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.928901 kubelet[3186]: E0417 23:39:28.928790 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.929133 kubelet[3186]: E0417 23:39:28.929032 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.929133 kubelet[3186]: W0417 23:39:28.929047 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.929133 kubelet[3186]: E0417 23:39:28.929067 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.929398 kubelet[3186]: E0417 23:39:28.929310 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.929398 kubelet[3186]: W0417 23:39:28.929323 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.929398 kubelet[3186]: E0417 23:39:28.929336 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:28.929705 kubelet[3186]: E0417 23:39:28.929585 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.929705 kubelet[3186]: W0417 23:39:28.929601 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.929705 kubelet[3186]: E0417 23:39:28.929616 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:28.940058 kubelet[3186]: E0417 23:39:28.939934 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:28.940058 kubelet[3186]: W0417 23:39:28.939950 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:28.940058 kubelet[3186]: E0417 23:39:28.939963 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:39:29.753993 kubelet[3186]: E0417 23:39:29.753936 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee" Apr 17 23:39:29.891821 kubelet[3186]: I0417 23:39:29.891774 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:39:29.930842 kubelet[3186]: E0417 23:39:29.930805 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:39:29.930842 kubelet[3186]: W0417 23:39:29.930835 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:39:29.931384 kubelet[3186]: E0417 23:39:29.930861 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:39:30.218441 containerd[1726]: time="2026-04-17T23:39:30.218384708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:30.221237 containerd[1726]: time="2026-04-17T23:39:30.221172334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:39:30.224340 containerd[1726]: time="2026-04-17T23:39:30.224288263Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:30.228733 containerd[1726]: time="2026-04-17T23:39:30.228681904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:39:30.229922 containerd[1726]: time="2026-04-17T23:39:30.229340511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.618516056s" Apr 17 23:39:30.229922 containerd[1726]: time="2026-04-17T23:39:30.229382511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:39:30.239600 containerd[1726]: time="2026-04-17T23:39:30.239563006Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:39:30.292571 containerd[1726]: time="2026-04-17T23:39:30.292428903Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518\"" Apr 17 23:39:30.293236 containerd[1726]: time="2026-04-17T23:39:30.293148509Z" level=info msg="StartContainer for \"7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518\"" Apr 17 23:39:30.322734 systemd[1]: run-containerd-runc-k8s.io-7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518-runc.l74H2T.mount: Deactivated successfully. Apr 17 23:39:30.330919 systemd[1]: Started cri-containerd-7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518.scope - libcontainer container 7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518. Apr 17 23:39:30.363327 containerd[1726]: time="2026-04-17T23:39:30.363281068Z" level=info msg="StartContainer for \"7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518\" returns successfully" Apr 17 23:39:30.372901 systemd[1]: cri-containerd-7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518.scope: Deactivated successfully. Apr 17 23:39:30.616543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518-rootfs.mount: Deactivated successfully. 
Apr 17 23:39:31.492600 kubelet[3186]: I0417 23:39:30.914086 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f56bb874f-89gft" podStartSLOduration=3.256071548 podStartE2EDuration="5.914067337s" podCreationTimestamp="2026-04-17 23:39:25 +0000 UTC" firstStartedPulling="2026-04-17 23:39:25.952521761 +0000 UTC m=+24.334358121" lastFinishedPulling="2026-04-17 23:39:28.61051765 +0000 UTC m=+26.992353910" observedRunningTime="2026-04-17 23:39:28.949583114 +0000 UTC m=+27.331419474" watchObservedRunningTime="2026-04-17 23:39:30.914067337 +0000 UTC m=+29.295903597"
Apr 17 23:39:31.754828 kubelet[3186]: E0417 23:39:31.753962 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:33.754912 kubelet[3186]: E0417 23:39:33.753604 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:35.755478 kubelet[3186]: E0417 23:39:35.754305 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:35.806950 containerd[1726]: time="2026-04-17T23:39:35.806858974Z" level=info msg="shim disconnected" id=7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518 namespace=k8s.io
Apr 17 23:39:35.806950 containerd[1726]: time="2026-04-17T23:39:35.806937075Z" level=warning msg="cleaning up after shim disconnected" id=7fad3ffc4ae40943eb48b1958adb7fcfcb11c0d7820814cefcd7bb60d5e56518 namespace=k8s.io
Apr 17 23:39:35.806950 containerd[1726]: time="2026-04-17T23:39:35.806950275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:39:35.906680 containerd[1726]: time="2026-04-17T23:39:35.906603307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 23:39:37.756392 kubelet[3186]: E0417 23:39:37.754986 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:38.111786 kubelet[3186]: I0417 23:39:38.111517 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:39:39.754429 kubelet[3186]: E0417 23:39:39.753952 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:41.754119 kubelet[3186]: E0417 23:39:41.753734 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:43.754997 kubelet[3186]: E0417 23:39:43.753826 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:45.754205 kubelet[3186]: E0417 23:39:45.754151 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:47.754234 kubelet[3186]: E0417 23:39:47.753360 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:48.542683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845734476.mount: Deactivated successfully.
Apr 17 23:39:48.579912 containerd[1726]: time="2026-04-17T23:39:48.579858959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:48.582737 containerd[1726]: time="2026-04-17T23:39:48.582690485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 23:39:48.586838 containerd[1726]: time="2026-04-17T23:39:48.586782023Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:48.591738 containerd[1726]: time="2026-04-17T23:39:48.591686868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:48.592581 containerd[1726]: time="2026-04-17T23:39:48.592344574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.685692167s"
Apr 17 23:39:48.592581 containerd[1726]: time="2026-04-17T23:39:48.592385274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 23:39:48.599914 containerd[1726]: time="2026-04-17T23:39:48.599882443Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:39:48.639691 containerd[1726]: time="2026-04-17T23:39:48.639632110Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe\""
Apr 17 23:39:48.640289 containerd[1726]: time="2026-04-17T23:39:48.640253016Z" level=info msg="StartContainer for \"adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe\""
Apr 17 23:39:48.678825 systemd[1]: Started cri-containerd-adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe.scope - libcontainer container adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe.
Apr 17 23:39:48.710583 containerd[1726]: time="2026-04-17T23:39:48.710493264Z" level=info msg="StartContainer for \"adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe\" returns successfully"
Apr 17 23:39:48.746332 systemd[1]: cri-containerd-adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe.scope: Deactivated successfully.
Apr 17 23:39:49.538770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe-rootfs.mount: Deactivated successfully.
Apr 17 23:39:49.753882 kubelet[3186]: E0417 23:39:49.753442 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:51.757686 kubelet[3186]: E0417 23:39:51.756298 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:52.299532 containerd[1726]: time="2026-04-17T23:39:52.299458686Z" level=info msg="shim disconnected" id=adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe namespace=k8s.io
Apr 17 23:39:52.299532 containerd[1726]: time="2026-04-17T23:39:52.299521587Z" level=warning msg="cleaning up after shim disconnected" id=adc8ca0a6dfbdb321e610b342d772a568c99a93214ed93dd882bd8b5f9c96ffe namespace=k8s.io
Apr 17 23:39:52.299532 containerd[1726]: time="2026-04-17T23:39:52.299533287Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:39:52.954252 containerd[1726]: time="2026-04-17T23:39:52.954207829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 23:39:53.754570 kubelet[3186]: E0417 23:39:53.754080 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:55.755902 kubelet[3186]: E0417 23:39:55.755861 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:57.755541 kubelet[3186]: E0417 23:39:57.753684 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:39:59.033356 containerd[1726]: time="2026-04-17T23:39:59.033294657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:59.036533 containerd[1726]: time="2026-04-17T23:39:59.036355888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 17 23:39:59.039528 containerd[1726]: time="2026-04-17T23:39:59.039454820Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:59.048033 containerd[1726]: time="2026-04-17T23:39:59.047968406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:39:59.048841 containerd[1726]: time="2026-04-17T23:39:59.048796414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 6.094538585s"
Apr 17 23:39:59.049185 containerd[1726]: time="2026-04-17T23:39:59.048969716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 17 23:39:59.057220 containerd[1726]: time="2026-04-17T23:39:59.057178899Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 23:39:59.096096 containerd[1726]: time="2026-04-17T23:39:59.096054493Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766\""
Apr 17 23:39:59.096578 containerd[1726]: time="2026-04-17T23:39:59.096547898Z" level=info msg="StartContainer for \"b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766\""
Apr 17 23:39:59.130812 systemd[1]: Started cri-containerd-b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766.scope - libcontainer container b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766.
Apr 17 23:39:59.160233 containerd[1726]: time="2026-04-17T23:39:59.160177643Z" level=info msg="StartContainer for \"b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766\" returns successfully"
Apr 17 23:39:59.754704 kubelet[3186]: E0417 23:39:59.753949 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:40:01.755070 kubelet[3186]: E0417 23:40:01.754386 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:40:03.754605 kubelet[3186]: E0417 23:40:03.754030 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:40:05.754701 kubelet[3186]: E0417 23:40:05.753445 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:40:07.156551 containerd[1726]: time="2026-04-17T23:40:07.156489549Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:40:07.159520 systemd[1]: cri-containerd-b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766.scope: Deactivated successfully.
Apr 17 23:40:07.185272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766-rootfs.mount: Deactivated successfully.
Apr 17 23:40:07.231438 kubelet[3186]: I0417 23:40:07.229494 3186 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 17 23:40:11.843832 containerd[1726]: time="2026-04-17T23:40:11.843761668Z" level=info msg="shim disconnected" id=b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766 namespace=k8s.io
Apr 17 23:40:11.843832 containerd[1726]: time="2026-04-17T23:40:11.843830669Z" level=warning msg="cleaning up after shim disconnected" id=b79e8a52a9b3cb77ca52610c6e5fe19850ce0795a408368e714dd4d61bfbb766 namespace=k8s.io
Apr 17 23:40:11.844309 containerd[1726]: time="2026-04-17T23:40:11.843845669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:40:11.854138 systemd[1]: Created slice kubepods-besteffort-pod6881ecfb_ba07_4f07_9908_f7af5cb84913.slice - libcontainer container kubepods-besteffort-pod6881ecfb_ba07_4f07_9908_f7af5cb84913.slice.
Apr 17 23:40:11.870975 systemd[1]: Created slice kubepods-besteffort-poda52a9606_2487_4d0a_8d3d_112a3887d0ee.slice - libcontainer container kubepods-besteffort-poda52a9606_2487_4d0a_8d3d_112a3887d0ee.slice.
Apr 17 23:40:11.881694 containerd[1726]: time="2026-04-17T23:40:11.881028245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fs9x,Uid:a52a9606-2487-4d0a-8d3d-112a3887d0ee,Namespace:calico-system,Attempt:0,}"
Apr 17 23:40:11.888081 systemd[1]: Created slice kubepods-besteffort-podbfb4688d_986d_4ce5_9339_fe3b9c2a1572.slice - libcontainer container kubepods-besteffort-podbfb4688d_986d_4ce5_9339_fe3b9c2a1572.slice.
Apr 17 23:40:11.898910 systemd[1]: Created slice kubepods-burstable-podd20ead5d_6097_4ec2_95c7_cb2c778d9ef9.slice - libcontainer container kubepods-burstable-podd20ead5d_6097_4ec2_95c7_cb2c778d9ef9.slice.
Apr 17 23:40:11.907974 systemd[1]: Created slice kubepods-burstable-pod95fefded_a482_4bce_a706_4f16bcb76d2b.slice - libcontainer container kubepods-burstable-pod95fefded_a482_4bce_a706_4f16bcb76d2b.slice.
Apr 17 23:40:11.914838 systemd[1]: Created slice kubepods-besteffort-pod89036b56_0e46_4839_b92f_5b8cf483ee20.slice - libcontainer container kubepods-besteffort-pod89036b56_0e46_4839_b92f_5b8cf483ee20.slice.
Apr 17 23:40:11.929157 kubelet[3186]: I0417 23:40:11.927052 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7w45\" (UniqueName: \"kubernetes.io/projected/95fefded-a482-4bce-a706-4f16bcb76d2b-kube-api-access-w7w45\") pod \"coredns-674b8bbfcf-sbk8d\" (UID: \"95fefded-a482-4bce-a706-4f16bcb76d2b\") " pod="kube-system/coredns-674b8bbfcf-sbk8d"
Apr 17 23:40:11.929157 kubelet[3186]: I0417 23:40:11.927095 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/89036b56-0e46-4839-b92f-5b8cf483ee20-goldmane-key-pair\") pod \"goldmane-5b85766d88-lchr8\" (UID: \"89036b56-0e46-4839-b92f-5b8cf483ee20\") " pod="calico-system/goldmane-5b85766d88-lchr8"
Apr 17 23:40:11.929157 kubelet[3186]: I0417 23:40:11.927117 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgmkr\" (UniqueName: \"kubernetes.io/projected/6881ecfb-ba07-4f07-9908-f7af5cb84913-kube-api-access-vgmkr\") pod \"whisker-596447fcf-5kzpg\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " pod="calico-system/whisker-596447fcf-5kzpg"
Apr 17 23:40:11.929157 kubelet[3186]: I0417 23:40:11.927139 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bfb4688d-986d-4ce5-9339-fe3b9c2a1572-calico-apiserver-certs\") pod \"calico-apiserver-67d9f5f86b-j8d8z\" (UID: \"bfb4688d-986d-4ce5-9339-fe3b9c2a1572\") " pod="calico-system/calico-apiserver-67d9f5f86b-j8d8z"
Apr 17 23:40:11.929157 kubelet[3186]: I0417 23:40:11.927161 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fs5p\" (UniqueName: \"kubernetes.io/projected/bfb4688d-986d-4ce5-9339-fe3b9c2a1572-kube-api-access-9fs5p\") pod \"calico-apiserver-67d9f5f86b-j8d8z\" (UID: \"bfb4688d-986d-4ce5-9339-fe3b9c2a1572\") " pod="calico-system/calico-apiserver-67d9f5f86b-j8d8z"
Apr 17 23:40:11.930192 kubelet[3186]: I0417 23:40:11.927194 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95fefded-a482-4bce-a706-4f16bcb76d2b-config-volume\") pod \"coredns-674b8bbfcf-sbk8d\" (UID: \"95fefded-a482-4bce-a706-4f16bcb76d2b\") " pod="kube-system/coredns-674b8bbfcf-sbk8d"
Apr 17 23:40:11.930192 kubelet[3186]: I0417 23:40:11.927231 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszw4\" (UniqueName: \"kubernetes.io/projected/d20ead5d-6097-4ec2-95c7-cb2c778d9ef9-kube-api-access-nszw4\") pod \"coredns-674b8bbfcf-6snhn\" (UID: \"d20ead5d-6097-4ec2-95c7-cb2c778d9ef9\") " pod="kube-system/coredns-674b8bbfcf-6snhn"
Apr 17 23:40:11.930192 kubelet[3186]: I0417 23:40:11.927253 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17cb0cd8-e168-42c7-8bf5-bf6f746d1982-calico-apiserver-certs\") pod \"calico-apiserver-67d9f5f86b-vmsph\" (UID: \"17cb0cd8-e168-42c7-8bf5-bf6f746d1982\") " pod="calico-system/calico-apiserver-67d9f5f86b-vmsph"
Apr 17 23:40:11.930192 kubelet[3186]: I0417 23:40:11.927276 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glj8j\" (UniqueName: \"kubernetes.io/projected/17cb0cd8-e168-42c7-8bf5-bf6f746d1982-kube-api-access-glj8j\") pod \"calico-apiserver-67d9f5f86b-vmsph\" (UID: \"17cb0cd8-e168-42c7-8bf5-bf6f746d1982\") " pod="calico-system/calico-apiserver-67d9f5f86b-vmsph"
Apr 17 23:40:11.930192 kubelet[3186]: I0417 23:40:11.927299 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89036b56-0e46-4839-b92f-5b8cf483ee20-config\") pod \"goldmane-5b85766d88-lchr8\" (UID: \"89036b56-0e46-4839-b92f-5b8cf483ee20\") " pod="calico-system/goldmane-5b85766d88-lchr8"
Apr 17 23:40:11.931906 kubelet[3186]: I0417 23:40:11.927321 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-nginx-config\") pod \"whisker-596447fcf-5kzpg\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " pod="calico-system/whisker-596447fcf-5kzpg"
Apr 17 23:40:11.931906 kubelet[3186]: I0417 23:40:11.927342 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d20ead5d-6097-4ec2-95c7-cb2c778d9ef9-config-volume\") pod \"coredns-674b8bbfcf-6snhn\" (UID: \"d20ead5d-6097-4ec2-95c7-cb2c778d9ef9\") " pod="kube-system/coredns-674b8bbfcf-6snhn"
Apr 17 23:40:11.931906 kubelet[3186]: I0417 23:40:11.927366 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkr26\" (UniqueName: \"kubernetes.io/projected/4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675-kube-api-access-zkr26\") pod \"calico-kube-controllers-759d59f7d9-2dqb9\" (UID: \"4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675\") " pod="calico-system/calico-kube-controllers-759d59f7d9-2dqb9"
Apr 17 23:40:11.931906 kubelet[3186]: I0417 23:40:11.927391 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89036b56-0e46-4839-b92f-5b8cf483ee20-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-lchr8\" (UID: \"89036b56-0e46-4839-b92f-5b8cf483ee20\") " pod="calico-system/goldmane-5b85766d88-lchr8"
Apr 17 23:40:11.931906 kubelet[3186]: I0417 23:40:11.927449 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-backend-key-pair\") pod \"whisker-596447fcf-5kzpg\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " pod="calico-system/whisker-596447fcf-5kzpg"
Apr 17 23:40:11.932185 kubelet[3186]: I0417 23:40:11.927477 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-ca-bundle\") pod \"whisker-596447fcf-5kzpg\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " pod="calico-system/whisker-596447fcf-5kzpg"
Apr 17 23:40:11.932185 kubelet[3186]: I0417 23:40:11.927529 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxqq\" (UniqueName: \"kubernetes.io/projected/89036b56-0e46-4839-b92f-5b8cf483ee20-kube-api-access-7nxqq\") pod \"goldmane-5b85766d88-lchr8\" (UID: \"89036b56-0e46-4839-b92f-5b8cf483ee20\") " pod="calico-system/goldmane-5b85766d88-lchr8"
Apr 17 23:40:11.932185 kubelet[3186]: I0417 23:40:11.927553 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675-tigera-ca-bundle\") pod \"calico-kube-controllers-759d59f7d9-2dqb9\" (UID: \"4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675\") " pod="calico-system/calico-kube-controllers-759d59f7d9-2dqb9"
Apr 17 23:40:11.942966 systemd[1]: Created slice kubepods-besteffort-pod4b1dbce2_5f0d_4778_a7a7_f0ca0b4da675.slice - libcontainer container kubepods-besteffort-pod4b1dbce2_5f0d_4778_a7a7_f0ca0b4da675.slice.
Apr 17 23:40:11.957621 systemd[1]: Created slice kubepods-besteffort-pod17cb0cd8_e168_42c7_8bf5_bf6f746d1982.slice - libcontainer container kubepods-besteffort-pod17cb0cd8_e168_42c7_8bf5_bf6f746d1982.slice.
Apr 17 23:40:12.056136 containerd[1726]: time="2026-04-17T23:40:12.055514806Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 23:40:12.056468 containerd[1726]: time="2026-04-17T23:40:12.056102612Z" level=error msg="Failed to destroy network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.061318 containerd[1726]: time="2026-04-17T23:40:12.056605817Z" level=error msg="encountered an error cleaning up failed sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.061318 containerd[1726]: time="2026-04-17T23:40:12.059697748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fs9x,Uid:a52a9606-2487-4d0a-8d3d-112a3887d0ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.071385 kubelet[3186]: E0417 23:40:12.071121 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.071385 kubelet[3186]: E0417 23:40:12.071203 3186 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9fs9x"
Apr 17 23:40:12.071385 kubelet[3186]: E0417 23:40:12.071247 3186 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9fs9x"
Apr 17 23:40:12.071613 kubelet[3186]: E0417 23:40:12.071320 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9fs9x_calico-system(a52a9606-2487-4d0a-8d3d-112a3887d0ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9fs9x_calico-system(a52a9606-2487-4d0a-8d3d-112a3887d0ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9fs9x" podUID="a52a9606-2487-4d0a-8d3d-112a3887d0ee"
Apr 17 23:40:12.127167 containerd[1726]: time="2026-04-17T23:40:12.127040728Z" level=info msg="CreateContainer within sandbox \"f99de332067f93176d47ffbf6c6ba435be851c1fb99cf8eec512261b64577fa0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a73085b174a779f2571185bf1cff650d25ceca4ed6fcf5fd131094b7c0b8a498\""
Apr 17 23:40:12.129300 containerd[1726]: time="2026-04-17T23:40:12.128966148Z" level=info msg="StartContainer for \"a73085b174a779f2571185bf1cff650d25ceca4ed6fcf5fd131094b7c0b8a498\""
Apr 17 23:40:12.157838 systemd[1]: Started cri-containerd-a73085b174a779f2571185bf1cff650d25ceca4ed6fcf5fd131094b7c0b8a498.scope - libcontainer container a73085b174a779f2571185bf1cff650d25ceca4ed6fcf5fd131094b7c0b8a498.
Apr 17 23:40:12.160457 containerd[1726]: time="2026-04-17T23:40:12.160204548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596447fcf-5kzpg,Uid:6881ecfb-ba07-4f07-9908-f7af5cb84913,Namespace:calico-system,Attempt:0,}"
Apr 17 23:40:12.190963 containerd[1726]: time="2026-04-17T23:40:12.190908228Z" level=info msg="StartContainer for \"a73085b174a779f2571185bf1cff650d25ceca4ed6fcf5fd131094b7c0b8a498\" returns successfully"
Apr 17 23:40:12.197431 containerd[1726]: time="2026-04-17T23:40:12.197077885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f5f86b-j8d8z,Uid:bfb4688d-986d-4ce5-9339-fe3b9c2a1572,Namespace:calico-system,Attempt:0,}"
Apr 17 23:40:12.206309 containerd[1726]: time="2026-04-17T23:40:12.205824065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6snhn,Uid:d20ead5d-6097-4ec2-95c7-cb2c778d9ef9,Namespace:kube-system,Attempt:0,}"
Apr 17 23:40:12.212298 containerd[1726]: time="2026-04-17T23:40:12.211965321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sbk8d,Uid:95fefded-a482-4bce-a706-4f16bcb76d2b,Namespace:kube-system,Attempt:0,}"
Apr 17 23:40:12.228947 containerd[1726]: time="2026-04-17T23:40:12.227223360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lchr8,Uid:89036b56-0e46-4839-b92f-5b8cf483ee20,Namespace:calico-system,Attempt:0,}"
Apr 17 23:40:12.253197 containerd[1726]: time="2026-04-17T23:40:12.253151097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-759d59f7d9-2dqb9,Uid:4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675,Namespace:calico-system,Attempt:0,}"
Apr 17 23:40:12.261899 containerd[1726]: time="2026-04-17T23:40:12.261858276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f5f86b-vmsph,Uid:17cb0cd8-e168-42c7-8bf5-bf6f746d1982,Namespace:calico-system,Attempt:0,}"
Apr 17 23:40:12.290830 containerd[1726]: time="2026-04-17T23:40:12.290769740Z" level=error msg="Failed to destroy network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.291153 containerd[1726]: time="2026-04-17T23:40:12.291094043Z" level=error msg="encountered an error cleaning up failed sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.291243 containerd[1726]: time="2026-04-17T23:40:12.291161543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596447fcf-5kzpg,Uid:6881ecfb-ba07-4f07-9908-f7af5cb84913,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.291511 kubelet[3186]: E0417 23:40:12.291464 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:40:12.291600 kubelet[3186]: E0417 23:40:12.291533 3186 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596447fcf-5kzpg"
Apr 17 23:40:12.291600 kubelet[3186]: E0417 23:40:12.291565 3186 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596447fcf-5kzpg"
Apr 17 23:40:12.291722 kubelet[3186]: E0417 23:40:12.291630 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-596447fcf-5kzpg_calico-system(6881ecfb-ba07-4f07-9908-f7af5cb84913)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-596447fcf-5kzpg_calico-system(6881ecfb-ba07-4f07-9908-f7af5cb84913)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-596447fcf-5kzpg" podUID="6881ecfb-ba07-4f07-9908-f7af5cb84913"
Apr 17 23:40:12.956627 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732-shm.mount: Deactivated successfully.
Apr 17 23:40:13.005376 kubelet[3186]: I0417 23:40:13.005331 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:40:13.005926 containerd[1726]: time="2026-04-17T23:40:13.005832467Z" level=info msg="StopPodSandbox for \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\"" Apr 17 23:40:13.006927 containerd[1726]: time="2026-04-17T23:40:13.006333271Z" level=info msg="Ensure that sandbox 3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2 in task-service has been cleanup successfully" Apr 17 23:40:13.011928 kubelet[3186]: I0417 23:40:13.011719 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:40:13.013110 containerd[1726]: time="2026-04-17T23:40:13.013075933Z" level=info msg="StopPodSandbox for \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\"" Apr 17 23:40:13.013722 containerd[1726]: time="2026-04-17T23:40:13.013244134Z" level=info msg="Ensure that sandbox 0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732 in task-service has been cleanup successfully" Apr 17 23:40:13.052468 kubelet[3186]: I0417 23:40:13.048639 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q4dh8" podStartSLOduration=15.113376327 podStartE2EDuration="48.048618257s" podCreationTimestamp="2026-04-17 23:39:25 +0000 UTC" firstStartedPulling="2026-04-17 23:39:26.114797697 +0000 UTC m=+24.496634057" lastFinishedPulling="2026-04-17 23:39:59.050039727 +0000 UTC m=+57.431875987" observedRunningTime="2026-04-17 23:40:13.038414564 +0000 UTC m=+71.420250824" watchObservedRunningTime="2026-04-17 23:40:13.048618257 +0000 UTC m=+71.430454517" Apr 17 23:40:13.063941 systemd-networkd[1356]: calia271cd3a4de: Link UP Apr 17 23:40:13.064282 systemd-networkd[1356]: calia271cd3a4de: 
Gained carrier Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.559 [ERROR][4147] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.626 [INFO][4147] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0 coredns-674b8bbfcf- kube-system d20ead5d-6097-4ec2-95c7-cb2c778d9ef9 952 0 2026-04-17 23:39:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 coredns-674b8bbfcf-6snhn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia271cd3a4de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.626 [INFO][4147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.813 [INFO][4214] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" HandleID="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Workload="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" Apr 17 23:40:13.104154 
containerd[1726]: 2026-04-17 23:40:12.832 [INFO][4214] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" HandleID="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Workload="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366460), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"coredns-674b8bbfcf-6snhn", "timestamp":"2026-04-17 23:40:12.81339421 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e1760)} Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.832 [INFO][4214] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.832 [INFO][4214] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.832 [INFO][4214] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.836 [INFO][4214] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.867 [INFO][4214] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.883 [INFO][4214] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.889 [INFO][4214] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.897 [INFO][4214] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.897 [INFO][4214] ipam/ipam.go 588: Found unclaimed block in 7.264366ms host="ci-4081.3.6-n-b8c45c9493" subnet=192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.897 [INFO][4214] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4081.3.6-n-b8c45c9493" subnet=192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.905 [INFO][4214] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4081.3.6-n-b8c45c9493" subnet=192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.906 [INFO][4214] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.907 
[INFO][4214] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.912 [INFO][4214] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.914 [INFO][4214] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.914 [INFO][4214] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4081.3.6-n-b8c45c9493" subnet=192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.933 [INFO][4214] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.933 [INFO][4214] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4081.3.6-n-b8c45c9493" subnet=192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.950 [INFO][4214] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4081.3.6-n-b8c45c9493" subnet=192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.951 [INFO][4214] ipam/ipam.go 623: Block '192.168.13.0/26' has 64 free ips which is more than 1 ips required. 
host="ci-4081.3.6-n-b8c45c9493" subnet=192.168.13.0/26 Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.951 [INFO][4214] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.104154 containerd[1726]: 2026-04-17 23:40:12.959 [INFO][4214] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1 Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.970 [INFO][4214] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.981 [INFO][4214] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.0/26] block=192.168.13.0/26 handle="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.981 [INFO][4214] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.0/26] handle="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.981 [INFO][4214] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.981 [INFO][4214] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.0/26] IPv6=[] ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" HandleID="k8s-pod-network.75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Workload="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.987 [INFO][4147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d20ead5d-6097-4ec2-95c7-cb2c778d9ef9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"coredns-674b8bbfcf-6snhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calia271cd3a4de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.988 [INFO][4147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.0/32] ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:12.988 [INFO][4147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia271cd3a4de ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" Apr 17 23:40:13.107288 containerd[1726]: 2026-04-17 23:40:13.064 [INFO][4147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" Apr 17 23:40:13.107750 containerd[1726]: 2026-04-17 23:40:13.064 [INFO][4147] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" 
WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d20ead5d-6097-4ec2-95c7-cb2c778d9ef9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1", Pod:"coredns-674b8bbfcf-6snhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia271cd3a4de", MAC:"42:c0:60:bb:c0:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.107750 containerd[1726]: 
2026-04-17 23:40:13.091 [INFO][4147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6snhn" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--6snhn-eth0" Apr 17 23:40:13.182316 containerd[1726]: time="2026-04-17T23:40:13.182005275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:13.182920 containerd[1726]: time="2026-04-17T23:40:13.182161376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:13.182920 containerd[1726]: time="2026-04-17T23:40:13.182181076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.182920 containerd[1726]: time="2026-04-17T23:40:13.182541380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.225380 systemd-networkd[1356]: calid118559f5d0: Link UP Apr 17 23:40:13.227604 systemd-networkd[1356]: calid118559f5d0: Gained carrier Apr 17 23:40:13.246060 systemd[1]: Started cri-containerd-75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1.scope - libcontainer container 75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1. 
Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.632 [ERROR][4157] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.684 [INFO][4157] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0 coredns-674b8bbfcf- kube-system 95fefded-a482-4bce-a706-4f16bcb76d2b 950 0 2026-04-17 23:39:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 coredns-674b8bbfcf-sbk8d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid118559f5d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.684 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.835 [INFO][4232] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" HandleID="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Workload="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" Apr 17 23:40:13.278207 containerd[1726]: 
2026-04-17 23:40:12.865 [INFO][4232] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" HandleID="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Workload="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ab6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"coredns-674b8bbfcf-sbk8d", "timestamp":"2026-04-17 23:40:12.83527541 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188dc0)} Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.865 [INFO][4232] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.981 [INFO][4232] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.981 [INFO][4232] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:12.985 [INFO][4232] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.076 [INFO][4232] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.102 [INFO][4232] ipam/ipam.go 526: Trying affinity for 192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.110 [INFO][4232] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.121 [INFO][4232] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.121 [INFO][4232] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.124 [INFO][4232] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5 Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.138 [INFO][4232] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.172 [INFO][4232] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.13.1/26] block=192.168.13.0/26 handle="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.174 [INFO][4232] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.1/26] handle="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.176 [INFO][4232] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:13.278207 containerd[1726]: 2026-04-17 23:40:13.177 [INFO][4232] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.1/26] IPv6=[] ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" HandleID="k8s-pod-network.8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Workload="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" Apr 17 23:40:13.280178 containerd[1726]: 2026-04-17 23:40:13.210 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95fefded-a482-4bce-a706-4f16bcb76d2b", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"coredns-674b8bbfcf-sbk8d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid118559f5d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.280178 containerd[1726]: 2026-04-17 23:40:13.211 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.1/32] ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" Apr 17 23:40:13.280178 containerd[1726]: 2026-04-17 23:40:13.211 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid118559f5d0 ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" Apr 17 23:40:13.280178 containerd[1726]: 2026-04-17 23:40:13.225 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" Apr 17 23:40:13.280178 containerd[1726]: 2026-04-17 23:40:13.227 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95fefded-a482-4bce-a706-4f16bcb76d2b", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5", Pod:"coredns-674b8bbfcf-sbk8d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid118559f5d0", MAC:"86:c5:48:52:1a:aa", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.280178 containerd[1726]: 2026-04-17 23:40:13.267 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5" Namespace="kube-system" Pod="coredns-674b8bbfcf-sbk8d" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-coredns--674b8bbfcf--sbk8d-eth0" Apr 17 23:40:13.334606 systemd-networkd[1356]: cali77f5437964d: Link UP Apr 17 23:40:13.337339 systemd-networkd[1356]: cali77f5437964d: Gained carrier Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:12.568 [ERROR][4136] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:12.624 [INFO][4136] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0 calico-apiserver-67d9f5f86b- calico-system bfb4688d-986d-4ce5-9339-fe3b9c2a1572 949 0 2026-04-17 23:39:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67d9f5f86b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 calico-apiserver-67d9f5f86b-j8d8z 
eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali77f5437964d [] [] }} ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:12.624 [INFO][4136] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:12.821 [INFO][4213] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" HandleID="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:12.867 [INFO][4213] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" HandleID="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d0d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"calico-apiserver-67d9f5f86b-j8d8z", "timestamp":"2026-04-17 23:40:12.821596685 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188c60)} Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:12.867 [INFO][4213] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.174 [INFO][4213] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.175 [INFO][4213] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.180 [INFO][4213] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.231 [INFO][4213] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.260 [INFO][4213] ipam/ipam.go 526: Trying affinity for 192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.267 [INFO][4213] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.275 [INFO][4213] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.275 [INFO][4213] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.291 [INFO][4213] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7 Apr 17 23:40:13.390211 containerd[1726]: 
2026-04-17 23:40:13.299 [INFO][4213] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.316 [INFO][4213] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.2/26] block=192.168.13.0/26 handle="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.316 [INFO][4213] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.2/26] handle="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.316 [INFO][4213] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:13.390211 containerd[1726]: 2026-04-17 23:40:13.316 [INFO][4213] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.2/26] IPv6=[] ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" HandleID="k8s-pod-network.d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" Apr 17 23:40:13.391724 containerd[1726]: 2026-04-17 23:40:13.322 [INFO][4136] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0", GenerateName:"calico-apiserver-67d9f5f86b-", Namespace:"calico-system", 
SelfLink:"", UID:"bfb4688d-986d-4ce5-9339-fe3b9c2a1572", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f5f86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"calico-apiserver-67d9f5f86b-j8d8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77f5437964d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.391724 containerd[1726]: 2026-04-17 23:40:13.323 [INFO][4136] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.2/32] ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" Apr 17 23:40:13.391724 containerd[1726]: 2026-04-17 23:40:13.323 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77f5437964d ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" 
Apr 17 23:40:13.391724 containerd[1726]: 2026-04-17 23:40:13.355 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" Apr 17 23:40:13.391724 containerd[1726]: 2026-04-17 23:40:13.356 [INFO][4136] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0", GenerateName:"calico-apiserver-67d9f5f86b-", Namespace:"calico-system", SelfLink:"", UID:"bfb4688d-986d-4ce5-9339-fe3b9c2a1572", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f5f86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7", Pod:"calico-apiserver-67d9f5f86b-j8d8z", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77f5437964d", MAC:"96:ae:1e:c1:64:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.391724 containerd[1726]: 2026-04-17 23:40:13.387 [INFO][4136] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-j8d8z" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--j8d8z-eth0" Apr 17 23:40:13.397774 containerd[1726]: time="2026-04-17T23:40:13.397396841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:13.398704 containerd[1726]: time="2026-04-17T23:40:13.398557751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:13.398704 containerd[1726]: time="2026-04-17T23:40:13.398594052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.399968 containerd[1726]: time="2026-04-17T23:40:13.399018456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.403381 containerd[1726]: time="2026-04-17T23:40:13.403350695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6snhn,Uid:d20ead5d-6097-4ec2-95c7-cb2c778d9ef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1\"" Apr 17 23:40:13.446872 systemd[1]: Started cri-containerd-8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5.scope - libcontainer container 8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5. Apr 17 23:40:13.459196 containerd[1726]: time="2026-04-17T23:40:13.458981303Z" level=info msg="CreateContainer within sandbox \"75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:40:13.480389 systemd-networkd[1356]: cali752fbce792f: Link UP Apr 17 23:40:13.485530 systemd-networkd[1356]: cali752fbce792f: Gained carrier Apr 17 23:40:13.523842 containerd[1726]: time="2026-04-17T23:40:13.523164089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:13.523842 containerd[1726]: time="2026-04-17T23:40:13.523247090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:13.523842 containerd[1726]: time="2026-04-17T23:40:13.523273090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.523842 containerd[1726]: time="2026-04-17T23:40:13.523395391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.530493 containerd[1726]: time="2026-04-17T23:40:13.530006151Z" level=info msg="CreateContainer within sandbox \"75cd49c9f0999f4168f44d90ae3704f98b14c1b996a48f62f20c2e9c88dfcef1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc3dcf6e9db4b50211e9e044dcba1de813edae9f88d229ee181ec62ff629744f\"" Apr 17 23:40:13.531941 containerd[1726]: time="2026-04-17T23:40:13.531911669Z" level=info msg="StartContainer for \"bc3dcf6e9db4b50211e9e044dcba1de813edae9f88d229ee181ec62ff629744f\"" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:12.596 [ERROR][4167] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:12.645 [INFO][4167] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0 goldmane-5b85766d88- calico-system 89036b56-0e46-4839-b92f-5b8cf483ee20 953 0 2026-04-17 23:39:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 goldmane-5b85766d88-lchr8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali752fbce792f [] [] }} ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:12.645 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" 
Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:12.841 [INFO][4222] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" HandleID="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Workload="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:12.869 [INFO][4222] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" HandleID="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Workload="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"goldmane-5b85766d88-lchr8", "timestamp":"2026-04-17 23:40:12.841726769 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000248000)} Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:12.869 [INFO][4222] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.316 [INFO][4222] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.316 [INFO][4222] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.332 [INFO][4222] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.353 [INFO][4222] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.386 [INFO][4222] ipam/ipam.go 526: Trying affinity for 192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.399 [INFO][4222] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.406 [INFO][4222] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.408 [INFO][4222] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.413 [INFO][4222] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.430 [INFO][4222] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.449 [INFO][4222] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.13.4/26] block=192.168.13.0/26 handle="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.452 [INFO][4222] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.4/26] handle="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.454 [INFO][4222] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:13.532865 containerd[1726]: 2026-04-17 23:40:13.454 [INFO][4222] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.4/26] IPv6=[] ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" HandleID="k8s-pod-network.9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Workload="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" Apr 17 23:40:13.533759 containerd[1726]: 2026-04-17 23:40:13.460 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"89036b56-0e46-4839-b92f-5b8cf483ee20", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"goldmane-5b85766d88-lchr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali752fbce792f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.533759 containerd[1726]: 2026-04-17 23:40:13.461 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.4/32] ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" Apr 17 23:40:13.533759 containerd[1726]: 2026-04-17 23:40:13.461 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali752fbce792f ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" Apr 17 23:40:13.533759 containerd[1726]: 2026-04-17 23:40:13.489 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" Apr 17 23:40:13.533759 containerd[1726]: 2026-04-17 23:40:13.490 [INFO][4167] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"89036b56-0e46-4839-b92f-5b8cf483ee20", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea", Pod:"goldmane-5b85766d88-lchr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali752fbce792f", MAC:"6e:5f:fa:55:2c:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.533759 containerd[1726]: 2026-04-17 23:40:13.524 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea" Namespace="calico-system" Pod="goldmane-5b85766d88-lchr8" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-goldmane--5b85766d88--lchr8-eth0" Apr 17 23:40:13.606711 systemd-networkd[1356]: calie13d7c28cd5: Link UP Apr 17 23:40:13.606994 systemd-networkd[1356]: calie13d7c28cd5: Gained carrier Apr 17 23:40:13.609518 systemd[1]: Started cri-containerd-d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7.scope - libcontainer container d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7. Apr 17 23:40:13.630642 containerd[1726]: time="2026-04-17T23:40:13.630511469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sbk8d,Uid:95fefded-a482-4bce-a706-4f16bcb76d2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5\"" Apr 17 23:40:13.646023 containerd[1726]: time="2026-04-17T23:40:13.645929509Z" level=info msg="CreateContainer within sandbox \"8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:12.696 [ERROR][4194] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:12.745 [INFO][4194] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0 calico-apiserver-67d9f5f86b- calico-system 17cb0cd8-e168-42c7-8bf5-bf6f746d1982 957 0 2026-04-17 23:39:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67d9f5f86b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 calico-apiserver-67d9f5f86b-vmsph eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie13d7c28cd5 [] [] }} ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:12.745 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:12.875 [INFO][4244] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" HandleID="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:12.889 [INFO][4244] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" HandleID="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ef70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"calico-apiserver-67d9f5f86b-vmsph", "timestamp":"2026-04-17 23:40:12.875588878 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00065e160)} Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:12.889 [INFO][4244] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.452 [INFO][4244] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.452 [INFO][4244] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.457 [INFO][4244] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.468 [INFO][4244] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.480 [INFO][4244] ipam/ipam.go 526: Trying affinity for 192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.488 [INFO][4244] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.499 [INFO][4244] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.500 [INFO][4244] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.505 [INFO][4244] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3 Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.528 [INFO][4244] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.557 [INFO][4244] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.5/26] block=192.168.13.0/26 handle="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.557 [INFO][4244] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.5/26] handle="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.557 [INFO][4244] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:40:13.666720 containerd[1726]: 2026-04-17 23:40:13.557 [INFO][4244] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.5/26] IPv6=[] ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" HandleID="k8s-pod-network.f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" Apr 17 23:40:13.671057 containerd[1726]: 2026-04-17 23:40:13.564 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0", GenerateName:"calico-apiserver-67d9f5f86b-", Namespace:"calico-system", SelfLink:"", UID:"17cb0cd8-e168-42c7-8bf5-bf6f746d1982", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f5f86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"calico-apiserver-67d9f5f86b-vmsph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie13d7c28cd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.671057 containerd[1726]: 2026-04-17 23:40:13.565 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.5/32] ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" Apr 17 23:40:13.671057 containerd[1726]: 2026-04-17 23:40:13.565 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie13d7c28cd5 ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" Apr 17 23:40:13.671057 containerd[1726]: 2026-04-17 23:40:13.608 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" Apr 17 23:40:13.671057 containerd[1726]: 2026-04-17 23:40:13.611 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0", GenerateName:"calico-apiserver-67d9f5f86b-", Namespace:"calico-system", SelfLink:"", UID:"17cb0cd8-e168-42c7-8bf5-bf6f746d1982", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f5f86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3", Pod:"calico-apiserver-67d9f5f86b-vmsph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie13d7c28cd5", MAC:"ce:51:83:d8:2f:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.671057 containerd[1726]: 2026-04-17 23:40:13.647 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3" Namespace="calico-system" Pod="calico-apiserver-67d9f5f86b-vmsph" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--apiserver--67d9f5f86b--vmsph-eth0" Apr 17 23:40:13.668903 systemd[1]: Started 
cri-containerd-bc3dcf6e9db4b50211e9e044dcba1de813edae9f88d229ee181ec62ff629744f.scope - libcontainer container bc3dcf6e9db4b50211e9e044dcba1de813edae9f88d229ee181ec62ff629744f. Apr 17 23:40:13.676954 containerd[1726]: time="2026-04-17T23:40:13.675890383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:13.676954 containerd[1726]: time="2026-04-17T23:40:13.675954383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:13.676954 containerd[1726]: time="2026-04-17T23:40:13.675974584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.676954 containerd[1726]: time="2026-04-17T23:40:13.676065184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.700022 systemd-networkd[1356]: cali62ebf5a272b: Link UP Apr 17 23:40:13.702726 systemd-networkd[1356]: cali62ebf5a272b: Gained carrier Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:12.654 [ERROR][4180] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:12.729 [INFO][4180] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0 calico-kube-controllers-759d59f7d9- calico-system 4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675 951 0 2026-04-17 23:39:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:759d59f7d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 calico-kube-controllers-759d59f7d9-2dqb9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali62ebf5a272b [] [] }} ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" Pod="calico-kube-controllers-759d59f7d9-2dqb9" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:12.729 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" Pod="calico-kube-controllers-759d59f7d9-2dqb9" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:12.889 [INFO][4238] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" HandleID="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:12.896 [INFO][4238] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" HandleID="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"calico-kube-controllers-759d59f7d9-2dqb9", "timestamp":"2026-04-17 23:40:12.8890046 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004af1e0)} Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:12.897 [INFO][4238] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.558 [INFO][4238] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.558 [INFO][4238] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.570 [INFO][4238] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.585 [INFO][4238] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.601 [INFO][4238] ipam/ipam.go 526: Trying affinity for 192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.604 [INFO][4238] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.609 [INFO][4238] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.610 [INFO][4238] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 
containerd[1726]: 2026-04-17 23:40:13.613 [INFO][4238] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.633 [INFO][4238] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.669 [INFO][4238] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.13.6/26] block=192.168.13.0/26 handle="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.669 [INFO][4238] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.6/26] handle="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.669 [INFO][4238] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:40:13.748883 containerd[1726]: 2026-04-17 23:40:13.669 [INFO][4238] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.6/26] IPv6=[] ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" HandleID="k8s-pod-network.edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Workload="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" Apr 17 23:40:13.755073 containerd[1726]: 2026-04-17 23:40:13.677 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" Pod="calico-kube-controllers-759d59f7d9-2dqb9" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0", GenerateName:"calico-kube-controllers-759d59f7d9-", Namespace:"calico-system", SelfLink:"", UID:"4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"759d59f7d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"calico-kube-controllers-759d59f7d9-2dqb9", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62ebf5a272b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.755073 containerd[1726]: 2026-04-17 23:40:13.691 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.6/32] ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" Pod="calico-kube-controllers-759d59f7d9-2dqb9" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" Apr 17 23:40:13.755073 containerd[1726]: 2026-04-17 23:40:13.691 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62ebf5a272b ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" Pod="calico-kube-controllers-759d59f7d9-2dqb9" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" Apr 17 23:40:13.755073 containerd[1726]: 2026-04-17 23:40:13.702 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" Pod="calico-kube-controllers-759d59f7d9-2dqb9" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" Apr 17 23:40:13.755073 containerd[1726]: 2026-04-17 23:40:13.706 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" Pod="calico-kube-controllers-759d59f7d9-2dqb9" 
WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0", GenerateName:"calico-kube-controllers-759d59f7d9-", Namespace:"calico-system", SelfLink:"", UID:"4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"759d59f7d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb", Pod:"calico-kube-controllers-759d59f7d9-2dqb9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62ebf5a272b", MAC:"e2:16:23:33:f9:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:13.755073 containerd[1726]: 2026-04-17 23:40:13.742 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb" Namespace="calico-system" 
Pod="calico-kube-controllers-759d59f7d9-2dqb9" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-calico--kube--controllers--759d59f7d9--2dqb9-eth0" Apr 17 23:40:13.766860 systemd[1]: Started cri-containerd-9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea.scope - libcontainer container 9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea. Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.166 [INFO][4289] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.166 [INFO][4289] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" iface="eth0" netns="/var/run/netns/cni-2d2c5eca-92fd-2f2a-8be2-1b1fc14288cb" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.167 [INFO][4289] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" iface="eth0" netns="/var/run/netns/cni-2d2c5eca-92fd-2f2a-8be2-1b1fc14288cb" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.167 [INFO][4289] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" iface="eth0" netns="/var/run/netns/cni-2d2c5eca-92fd-2f2a-8be2-1b1fc14288cb" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.167 [INFO][4289] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.167 [INFO][4289] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.306 [INFO][4338] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.306 [INFO][4338] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.680 [INFO][4338] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.717 [WARNING][4338] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.724 [INFO][4338] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.737 [INFO][4338] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:13.773891 containerd[1726]: 2026-04-17 23:40:13.747 [INFO][4289] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:40:13.780525 containerd[1726]: time="2026-04-17T23:40:13.779066325Z" level=info msg="TearDown network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\" successfully" Apr 17 23:40:13.781848 containerd[1726]: time="2026-04-17T23:40:13.779894232Z" level=info msg="StopPodSandbox for \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\" returns successfully" Apr 17 23:40:13.785714 containerd[1726]: time="2026-04-17T23:40:13.782637457Z" level=info msg="CreateContainer within sandbox \"8dda3fc9aa63c4d1638ad82c661dbccc0c7bc29119b7734d25ab7429b08579f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"731aeb2b5bc68b197079b9dbdbd29a0936f3ac3c009fba92576c82b34cd05ad7\"" Apr 17 23:40:13.790678 containerd[1726]: time="2026-04-17T23:40:13.786872596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fs9x,Uid:a52a9606-2487-4d0a-8d3d-112a3887d0ee,Namespace:calico-system,Attempt:1,}" Apr 17 23:40:13.796968 
containerd[1726]: time="2026-04-17T23:40:13.796934088Z" level=info msg="StartContainer for \"731aeb2b5bc68b197079b9dbdbd29a0936f3ac3c009fba92576c82b34cd05ad7\"" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.210 [INFO][4279] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.210 [INFO][4279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" iface="eth0" netns="/var/run/netns/cni-1fc7ba26-1943-5668-1bf1-eb6a3d4602c6" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.210 [INFO][4279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" iface="eth0" netns="/var/run/netns/cni-1fc7ba26-1943-5668-1bf1-eb6a3d4602c6" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.219 [INFO][4279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" iface="eth0" netns="/var/run/netns/cni-1fc7ba26-1943-5668-1bf1-eb6a3d4602c6" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.219 [INFO][4279] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.219 [INFO][4279] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.388 [INFO][4359] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.388 [INFO][4359] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.737 [INFO][4359] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.779 [WARNING][4359] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.781 [INFO][4359] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.797 [INFO][4359] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:13.832251 containerd[1726]: 2026-04-17 23:40:13.826 [INFO][4279] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:40:13.833776 containerd[1726]: time="2026-04-17T23:40:13.833739124Z" level=info msg="TearDown network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\" successfully" Apr 17 23:40:13.834015 containerd[1726]: time="2026-04-17T23:40:13.833988226Z" level=info msg="StopPodSandbox for \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\" returns successfully" Apr 17 23:40:13.841768 containerd[1726]: time="2026-04-17T23:40:13.841393694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:13.841768 containerd[1726]: time="2026-04-17T23:40:13.841462294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:13.841768 containerd[1726]: time="2026-04-17T23:40:13.841479094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.841768 containerd[1726]: time="2026-04-17T23:40:13.841566595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.847855 containerd[1726]: time="2026-04-17T23:40:13.847827652Z" level=info msg="StartContainer for \"bc3dcf6e9db4b50211e9e044dcba1de813edae9f88d229ee181ec62ff629744f\" returns successfully" Apr 17 23:40:13.851876 kubelet[3186]: I0417 23:40:13.851848 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgmkr\" (UniqueName: \"kubernetes.io/projected/6881ecfb-ba07-4f07-9908-f7af5cb84913-kube-api-access-vgmkr\") pod \"6881ecfb-ba07-4f07-9908-f7af5cb84913\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " Apr 17 23:40:13.853094 kubelet[3186]: I0417 23:40:13.853072 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-backend-key-pair\") pod \"6881ecfb-ba07-4f07-9908-f7af5cb84913\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " Apr 17 23:40:13.857788 kubelet[3186]: I0417 23:40:13.857766 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-ca-bundle\") pod \"6881ecfb-ba07-4f07-9908-f7af5cb84913\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " Apr 17 23:40:13.858031 kubelet[3186]: I0417 23:40:13.858014 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-nginx-config\") pod \"6881ecfb-ba07-4f07-9908-f7af5cb84913\" (UID: \"6881ecfb-ba07-4f07-9908-f7af5cb84913\") " Apr 17 23:40:13.864582 kubelet[3186]: I0417 23:40:13.863636 3186 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "6881ecfb-ba07-4f07-9908-f7af5cb84913" (UID: "6881ecfb-ba07-4f07-9908-f7af5cb84913"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:40:13.867165 kubelet[3186]: I0417 23:40:13.867132 3186 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6881ecfb-ba07-4f07-9908-f7af5cb84913" (UID: "6881ecfb-ba07-4f07-9908-f7af5cb84913"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:40:13.869326 kubelet[3186]: I0417 23:40:13.869276 3186 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6881ecfb-ba07-4f07-9908-f7af5cb84913-kube-api-access-vgmkr" (OuterVolumeSpecName: "kube-api-access-vgmkr") pod "6881ecfb-ba07-4f07-9908-f7af5cb84913" (UID: "6881ecfb-ba07-4f07-9908-f7af5cb84913"). InnerVolumeSpecName "kube-api-access-vgmkr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:40:13.870715 kubelet[3186]: I0417 23:40:13.869747 3186 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6881ecfb-ba07-4f07-9908-f7af5cb84913" (UID: "6881ecfb-ba07-4f07-9908-f7af5cb84913"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:40:13.914454 containerd[1726]: time="2026-04-17T23:40:13.913867555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:13.914454 containerd[1726]: time="2026-04-17T23:40:13.913940056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:13.914454 containerd[1726]: time="2026-04-17T23:40:13.913965556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.914454 containerd[1726]: time="2026-04-17T23:40:13.914070757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:13.948307 systemd[1]: run-netns-cni\x2d1fc7ba26\x2d1943\x2d5668\x2d1bf1\x2deb6a3d4602c6.mount: Deactivated successfully. Apr 17 23:40:13.948434 systemd[1]: var-lib-kubelet-pods-6881ecfb\x2dba07\x2d4f07\x2d9908\x2df7af5cb84913-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvgmkr.mount: Deactivated successfully. Apr 17 23:40:13.948520 systemd[1]: var-lib-kubelet-pods-6881ecfb\x2dba07\x2d4f07\x2d9908\x2df7af5cb84913-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:40:13.948605 systemd[1]: run-netns-cni\x2d2d2c5eca\x2d92fd\x2d2f2a\x2d8be2\x2d1b1fc14288cb.mount: Deactivated successfully. 
Apr 17 23:40:13.949749 containerd[1726]: time="2026-04-17T23:40:13.949593581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f5f86b-j8d8z,Uid:bfb4688d-986d-4ce5-9339-fe3b9c2a1572,Namespace:calico-system,Attempt:0,} returns sandbox id \"d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7\"" Apr 17 23:40:13.955747 containerd[1726]: time="2026-04-17T23:40:13.955712237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:40:13.959306 kubelet[3186]: I0417 23:40:13.959280 3186 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-b8c45c9493\" DevicePath \"\"" Apr 17 23:40:13.959731 kubelet[3186]: I0417 23:40:13.959708 3186 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-whisker-ca-bundle\") on node \"ci-4081.3.6-n-b8c45c9493\" DevicePath \"\"" Apr 17 23:40:13.959894 kubelet[3186]: I0417 23:40:13.959853 3186 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6881ecfb-ba07-4f07-9908-f7af5cb84913-nginx-config\") on node \"ci-4081.3.6-n-b8c45c9493\" DevicePath \"\"" Apr 17 23:40:13.959894 kubelet[3186]: I0417 23:40:13.959874 3186 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vgmkr\" (UniqueName: \"kubernetes.io/projected/6881ecfb-ba07-4f07-9908-f7af5cb84913-kube-api-access-vgmkr\") on node \"ci-4081.3.6-n-b8c45c9493\" DevicePath \"\"" Apr 17 23:40:13.972399 containerd[1726]: time="2026-04-17T23:40:13.972362889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lchr8,Uid:89036b56-0e46-4839-b92f-5b8cf483ee20,Namespace:calico-system,Attempt:0,} returns sandbox id \"9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea\"" Apr 
17 23:40:13.980168 systemd[1]: Started cri-containerd-f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3.scope - libcontainer container f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3. Apr 17 23:40:14.032838 systemd[1]: Started cri-containerd-731aeb2b5bc68b197079b9dbdbd29a0936f3ac3c009fba92576c82b34cd05ad7.scope - libcontainer container 731aeb2b5bc68b197079b9dbdbd29a0936f3ac3c009fba92576c82b34cd05ad7. Apr 17 23:40:14.038058 systemd[1]: Started cri-containerd-edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb.scope - libcontainer container edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb. Apr 17 23:40:14.052470 systemd[1]: Removed slice kubepods-besteffort-pod6881ecfb_ba07_4f07_9908_f7af5cb84913.slice - libcontainer container kubepods-besteffort-pod6881ecfb_ba07_4f07_9908_f7af5cb84913.slice. Apr 17 23:40:14.129972 containerd[1726]: time="2026-04-17T23:40:14.129208621Z" level=info msg="StartContainer for \"731aeb2b5bc68b197079b9dbdbd29a0936f3ac3c009fba92576c82b34cd05ad7\" returns successfully" Apr 17 23:40:14.134883 kubelet[3186]: I0417 23:40:14.131917 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6snhn" podStartSLOduration=72.131893145 podStartE2EDuration="1m12.131893145s" podCreationTimestamp="2026-04-17 23:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:14.073772615 +0000 UTC m=+72.455608875" watchObservedRunningTime="2026-04-17 23:40:14.131893145 +0000 UTC m=+72.513729405" Apr 17 23:40:14.135900 systemd[1]: run-containerd-runc-k8s.io-a73085b174a779f2571185bf1cff650d25ceca4ed6fcf5fd131094b7c0b8a498-runc.XTnbAm.mount: Deactivated successfully. 
Apr 17 23:40:14.239750 systemd[1]: Created slice kubepods-besteffort-podeb05e89c_cb0d_408f_a7fb_e4735716bf6d.slice - libcontainer container kubepods-besteffort-podeb05e89c_cb0d_408f_a7fb_e4735716bf6d.slice. Apr 17 23:40:14.262867 kubelet[3186]: I0417 23:40:14.262705 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/eb05e89c-cb0d-408f-a7fb-e4735716bf6d-nginx-config\") pod \"whisker-fd85d588d-2xb7s\" (UID: \"eb05e89c-cb0d-408f-a7fb-e4735716bf6d\") " pod="calico-system/whisker-fd85d588d-2xb7s" Apr 17 23:40:14.263254 kubelet[3186]: I0417 23:40:14.263088 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb05e89c-cb0d-408f-a7fb-e4735716bf6d-whisker-ca-bundle\") pod \"whisker-fd85d588d-2xb7s\" (UID: \"eb05e89c-cb0d-408f-a7fb-e4735716bf6d\") " pod="calico-system/whisker-fd85d588d-2xb7s" Apr 17 23:40:14.263254 kubelet[3186]: I0417 23:40:14.263136 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb05e89c-cb0d-408f-a7fb-e4735716bf6d-whisker-backend-key-pair\") pod \"whisker-fd85d588d-2xb7s\" (UID: \"eb05e89c-cb0d-408f-a7fb-e4735716bf6d\") " pod="calico-system/whisker-fd85d588d-2xb7s" Apr 17 23:40:14.263254 kubelet[3186]: I0417 23:40:14.263168 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2xrj\" (UniqueName: \"kubernetes.io/projected/eb05e89c-cb0d-408f-a7fb-e4735716bf6d-kube-api-access-g2xrj\") pod \"whisker-fd85d588d-2xb7s\" (UID: \"eb05e89c-cb0d-408f-a7fb-e4735716bf6d\") " pod="calico-system/whisker-fd85d588d-2xb7s" Apr 17 23:40:14.300370 containerd[1726]: time="2026-04-17T23:40:14.299283973Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67d9f5f86b-vmsph,Uid:17cb0cd8-e168-42c7-8bf5-bf6f746d1982,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3\"" Apr 17 23:40:14.359231 systemd-networkd[1356]: calic6d93488e9c: Link UP Apr 17 23:40:14.359468 systemd-networkd[1356]: calic6d93488e9c: Gained carrier Apr 17 23:40:14.369352 containerd[1726]: time="2026-04-17T23:40:14.369206511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-759d59f7d9-2dqb9,Uid:4b1dbce2-5f0d-4778-a7a7-f0ca0b4da675,Namespace:calico-system,Attempt:0,} returns sandbox id \"edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb\"" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.010 [ERROR][4634] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.084 [INFO][4634] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0 csi-node-driver- calico-system a52a9606-2487-4d0a-8d3d-112a3887d0ee 985 0 2026-04-17 23:39:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 csi-node-driver-9fs9x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic6d93488e9c [] [] }} ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-" Apr 
17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.084 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.188 [INFO][4704] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" HandleID="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.207 [INFO][4704] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" HandleID="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"csi-node-driver-9fs9x", "timestamp":"2026-04-17 23:40:14.188886265 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188c60)} Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.207 [INFO][4704] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.208 [INFO][4704] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.208 [INFO][4704] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.257 [INFO][4704] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.282 [INFO][4704] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.302 [INFO][4704] ipam/ipam.go 526: Trying affinity for 192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.308 [INFO][4704] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.315 [INFO][4704] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.315 [INFO][4704] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.319 [INFO][4704] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147 Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.329 [INFO][4704] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.353 [INFO][4704] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.13.7/26] block=192.168.13.0/26 handle="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.354 [INFO][4704] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.7/26] handle="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.354 [INFO][4704] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:14.404037 containerd[1726]: 2026-04-17 23:40:14.354 [INFO][4704] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.7/26] IPv6=[] ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" HandleID="k8s-pod-network.610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:14.405018 containerd[1726]: 2026-04-17 23:40:14.357 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a52a9606-2487-4d0a-8d3d-112a3887d0ee", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"csi-node-driver-9fs9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d93488e9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.405018 containerd[1726]: 2026-04-17 23:40:14.357 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.7/32] ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:14.405018 containerd[1726]: 2026-04-17 23:40:14.357 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6d93488e9c ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:14.405018 containerd[1726]: 2026-04-17 23:40:14.360 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:14.405018 containerd[1726]: 2026-04-17 23:40:14.362 
[INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a52a9606-2487-4d0a-8d3d-112a3887d0ee", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147", Pod:"csi-node-driver-9fs9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d93488e9c", MAC:"9a:71:84:fb:db:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.405018 containerd[1726]: 2026-04-17 23:40:14.398 [INFO][4634] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147" Namespace="calico-system" Pod="csi-node-driver-9fs9x" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:40:14.443590 containerd[1726]: time="2026-04-17T23:40:14.443445889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:14.443872 containerd[1726]: time="2026-04-17T23:40:14.443777692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:14.443872 containerd[1726]: time="2026-04-17T23:40:14.443801992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.444149 containerd[1726]: time="2026-04-17T23:40:14.444052094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.460839 systemd[1]: Started cri-containerd-610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147.scope - libcontainer container 610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147. 
Apr 17 23:40:14.484476 containerd[1726]: time="2026-04-17T23:40:14.484437563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9fs9x,Uid:a52a9606-2487-4d0a-8d3d-112a3887d0ee,Namespace:calico-system,Attempt:1,} returns sandbox id \"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147\"" Apr 17 23:40:14.544185 containerd[1726]: time="2026-04-17T23:40:14.544128408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd85d588d-2xb7s,Uid:eb05e89c-cb0d-408f-a7fb-e4735716bf6d,Namespace:calico-system,Attempt:0,}" Apr 17 23:40:14.636890 systemd-networkd[1356]: cali77f5437964d: Gained IPv6LL Apr 17 23:40:14.687093 systemd-networkd[1356]: calif1e290f41a6: Link UP Apr 17 23:40:14.687356 systemd-networkd[1356]: calif1e290f41a6: Gained carrier Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.602 [ERROR][4797] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.613 [INFO][4797] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0 whisker-fd85d588d- calico-system eb05e89c-cb0d-408f-a7fb-e4735716bf6d 1036 0 2026-04-17 23:40:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fd85d588d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-b8c45c9493 whisker-fd85d588d-2xb7s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif1e290f41a6 [] [] }} ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-" Apr 17 
23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.613 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.641 [INFO][4810] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" HandleID="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.647 [INFO][4810] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" HandleID="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b8c45c9493", "pod":"whisker-fd85d588d-2xb7s", "timestamp":"2026-04-17 23:40:14.641420496 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b8c45c9493", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002bef20)} Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.647 [INFO][4810] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.648 [INFO][4810] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.648 [INFO][4810] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b8c45c9493' Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.650 [INFO][4810] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.654 [INFO][4810] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.658 [INFO][4810] ipam/ipam.go 526: Trying affinity for 192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.659 [INFO][4810] ipam/ipam.go 160: Attempting to load block cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.661 [INFO][4810] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.661 [INFO][4810] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.663 [INFO][4810] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811 Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.672 [INFO][4810] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.682 [INFO][4810] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.13.8/26] block=192.168.13.0/26 handle="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.682 [INFO][4810] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.13.8/26] handle="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" host="ci-4081.3.6-n-b8c45c9493" Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.682 [INFO][4810] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:40:14.703892 containerd[1726]: 2026-04-17 23:40:14.682 [INFO][4810] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.13.8/26] IPv6=[] ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" HandleID="k8s-pod-network.abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" Apr 17 23:40:14.705032 containerd[1726]: 2026-04-17 23:40:14.684 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0", GenerateName:"whisker-fd85d588d-", Namespace:"calico-system", SelfLink:"", UID:"eb05e89c-cb0d-408f-a7fb-e4735716bf6d", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fd85d588d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"", Pod:"whisker-fd85d588d-2xb7s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif1e290f41a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.705032 containerd[1726]: 2026-04-17 23:40:14.684 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.8/32] ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" Apr 17 23:40:14.705032 containerd[1726]: 2026-04-17 23:40:14.684 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1e290f41a6 ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" Apr 17 23:40:14.705032 containerd[1726]: 2026-04-17 23:40:14.686 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" Apr 17 23:40:14.705032 containerd[1726]: 2026-04-17 23:40:14.686 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0", GenerateName:"whisker-fd85d588d-", Namespace:"calico-system", SelfLink:"", UID:"eb05e89c-cb0d-408f-a7fb-e4735716bf6d", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 40, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fd85d588d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811", Pod:"whisker-fd85d588d-2xb7s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif1e290f41a6", MAC:"36:87:20:19:75:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:40:14.705032 containerd[1726]: 2026-04-17 23:40:14.700 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811" Namespace="calico-system" Pod="whisker-fd85d588d-2xb7s" 
WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--fd85d588d--2xb7s-eth0" Apr 17 23:40:14.728618 containerd[1726]: time="2026-04-17T23:40:14.728259189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:40:14.728618 containerd[1726]: time="2026-04-17T23:40:14.728404590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:40:14.728618 containerd[1726]: time="2026-04-17T23:40:14.728460490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.728618 containerd[1726]: time="2026-04-17T23:40:14.728563691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:40:14.749857 systemd[1]: Started cri-containerd-abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811.scope - libcontainer container abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811. 
Apr 17 23:40:14.764867 systemd-networkd[1356]: cali752fbce792f: Gained IPv6LL Apr 17 23:40:14.827013 containerd[1726]: time="2026-04-17T23:40:14.826938389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd85d588d-2xb7s,Uid:eb05e89c-cb0d-408f-a7fb-e4735716bf6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811\"" Apr 17 23:40:15.020880 systemd-networkd[1356]: calia271cd3a4de: Gained IPv6LL Apr 17 23:40:15.021266 systemd-networkd[1356]: cali62ebf5a272b: Gained IPv6LL Apr 17 23:40:15.084935 systemd-networkd[1356]: calie13d7c28cd5: Gained IPv6LL Apr 17 23:40:15.114679 kubelet[3186]: I0417 23:40:15.113148 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sbk8d" podStartSLOduration=72.113121501 podStartE2EDuration="1m12.113121501s" podCreationTimestamp="2026-04-17 23:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:40:15.082560123 +0000 UTC m=+73.464396483" watchObservedRunningTime="2026-04-17 23:40:15.113121501 +0000 UTC m=+73.494957761" Apr 17 23:40:15.213557 systemd-networkd[1356]: calid118559f5d0: Gained IPv6LL Apr 17 23:40:15.345752 kernel: calico-node[4923]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:40:15.761628 kubelet[3186]: I0417 23:40:15.761497 3186 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6881ecfb-ba07-4f07-9908-f7af5cb84913" path="/var/lib/kubelet/pods/6881ecfb-ba07-4f07-9908-f7af5cb84913/volumes" Apr 17 23:40:15.980843 systemd-networkd[1356]: calic6d93488e9c: Gained IPv6LL Apr 17 23:40:16.044881 systemd-networkd[1356]: calif1e290f41a6: Gained IPv6LL Apr 17 23:40:16.602160 systemd-networkd[1356]: vxlan.calico: Link UP Apr 17 23:40:16.602170 systemd-networkd[1356]: vxlan.calico: Gained carrier Apr 17 23:40:17.603898 containerd[1726]: 
time="2026-04-17T23:40:17.603787236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:17.608416 containerd[1726]: time="2026-04-17T23:40:17.608245376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:40:17.612684 containerd[1726]: time="2026-04-17T23:40:17.612597116Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:17.617001 containerd[1726]: time="2026-04-17T23:40:17.616945056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:17.618217 containerd[1726]: time="2026-04-17T23:40:17.617634462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.661726023s" Apr 17 23:40:17.618217 containerd[1726]: time="2026-04-17T23:40:17.617693062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:40:17.618964 containerd[1726]: time="2026-04-17T23:40:17.618939674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:40:17.628442 containerd[1726]: time="2026-04-17T23:40:17.628408960Z" level=info msg="CreateContainer within sandbox \"d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:40:17.668254 containerd[1726]: time="2026-04-17T23:40:17.668198323Z" level=info msg="CreateContainer within sandbox \"d40f48d64acdd2b7cfdf2404c1f61687b41e6215b8548d7e14c8ce501dc07ff7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b7a98c6a0b855c336756e82d04382fbe20dc125ca42cafe99ee5707ad21a85a\"" Apr 17 23:40:17.668935 containerd[1726]: time="2026-04-17T23:40:17.668845129Z" level=info msg="StartContainer for \"8b7a98c6a0b855c336756e82d04382fbe20dc125ca42cafe99ee5707ad21a85a\"" Apr 17 23:40:17.702663 systemd[1]: run-containerd-runc-k8s.io-8b7a98c6a0b855c336756e82d04382fbe20dc125ca42cafe99ee5707ad21a85a-runc.uwCNSl.mount: Deactivated successfully. Apr 17 23:40:17.712817 systemd[1]: Started cri-containerd-8b7a98c6a0b855c336756e82d04382fbe20dc125ca42cafe99ee5707ad21a85a.scope - libcontainer container 8b7a98c6a0b855c336756e82d04382fbe20dc125ca42cafe99ee5707ad21a85a. Apr 17 23:40:17.759962 containerd[1726]: time="2026-04-17T23:40:17.759777559Z" level=info msg="StartContainer for \"8b7a98c6a0b855c336756e82d04382fbe20dc125ca42cafe99ee5707ad21a85a\" returns successfully" Apr 17 23:40:18.098679 kubelet[3186]: I0417 23:40:18.098521 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-67d9f5f86b-j8d8z" podStartSLOduration=50.4337273 podStartE2EDuration="54.09840695s" podCreationTimestamp="2026-04-17 23:39:24 +0000 UTC" firstStartedPulling="2026-04-17 23:40:13.954073222 +0000 UTC m=+72.335909482" lastFinishedPulling="2026-04-17 23:40:17.618752872 +0000 UTC m=+76.000589132" observedRunningTime="2026-04-17 23:40:18.098160348 +0000 UTC m=+76.479996708" watchObservedRunningTime="2026-04-17 23:40:18.09840695 +0000 UTC m=+76.480243310" Apr 17 23:40:18.284867 systemd-networkd[1356]: vxlan.calico: Gained IPv6LL Apr 17 23:40:20.075872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143699006.mount: Deactivated 
successfully. Apr 17 23:40:20.597411 containerd[1726]: time="2026-04-17T23:40:20.597353368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.600401 containerd[1726]: time="2026-04-17T23:40:20.600225296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:40:20.603719 containerd[1726]: time="2026-04-17T23:40:20.603596429Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.608145 containerd[1726]: time="2026-04-17T23:40:20.608087474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.608970 containerd[1726]: time="2026-04-17T23:40:20.608821681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.989258602s" Apr 17 23:40:20.608970 containerd[1726]: time="2026-04-17T23:40:20.608861581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:40:20.610857 containerd[1726]: time="2026-04-17T23:40:20.610797500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:40:20.616714 containerd[1726]: time="2026-04-17T23:40:20.616562557Z" level=info msg="CreateContainer within sandbox 
\"9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:40:20.667679 containerd[1726]: time="2026-04-17T23:40:20.667619959Z" level=info msg="CreateContainer within sandbox \"9defa040c174abad9415bb029101ad625b0dda37facbfbefc8873de49fefbeea\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"08ea172026f918ee754e4c211898ff01720f931a539df73db615b611ba497a63\"" Apr 17 23:40:20.669828 containerd[1726]: time="2026-04-17T23:40:20.668635069Z" level=info msg="StartContainer for \"08ea172026f918ee754e4c211898ff01720f931a539df73db615b611ba497a63\"" Apr 17 23:40:20.704276 systemd[1]: run-containerd-runc-k8s.io-08ea172026f918ee754e4c211898ff01720f931a539df73db615b611ba497a63-runc.LKDepQ.mount: Deactivated successfully. Apr 17 23:40:20.710827 systemd[1]: Started cri-containerd-08ea172026f918ee754e4c211898ff01720f931a539df73db615b611ba497a63.scope - libcontainer container 08ea172026f918ee754e4c211898ff01720f931a539df73db615b611ba497a63. 
Apr 17 23:40:20.756069 containerd[1726]: time="2026-04-17T23:40:20.756021228Z" level=info msg="StartContainer for \"08ea172026f918ee754e4c211898ff01720f931a539df73db615b611ba497a63\" returns successfully" Apr 17 23:40:20.913025 containerd[1726]: time="2026-04-17T23:40:20.912891870Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:20.916080 containerd[1726]: time="2026-04-17T23:40:20.916023501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:40:20.918182 containerd[1726]: time="2026-04-17T23:40:20.918148022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 307.318822ms" Apr 17 23:40:20.918286 containerd[1726]: time="2026-04-17T23:40:20.918182322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:40:20.919461 containerd[1726]: time="2026-04-17T23:40:20.919198532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:40:20.930376 containerd[1726]: time="2026-04-17T23:40:20.930341341Z" level=info msg="CreateContainer within sandbox \"f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:40:20.971643 containerd[1726]: time="2026-04-17T23:40:20.971584847Z" level=info msg="CreateContainer within sandbox \"f1dadf4913d10bc7f14a79dc4e9d9ed372a5e480828a9bd38adabbcab7d97ec3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"8b645fd00bd38e85d796d72ea39258ea6ebc4489bceb74fef0231a9851adddc1\"" Apr 17 23:40:20.972879 containerd[1726]: time="2026-04-17T23:40:20.972375155Z" level=info msg="StartContainer for \"8b645fd00bd38e85d796d72ea39258ea6ebc4489bceb74fef0231a9851adddc1\"" Apr 17 23:40:21.003864 systemd[1]: Started cri-containerd-8b645fd00bd38e85d796d72ea39258ea6ebc4489bceb74fef0231a9851adddc1.scope - libcontainer container 8b645fd00bd38e85d796d72ea39258ea6ebc4489bceb74fef0231a9851adddc1. Apr 17 23:40:21.051902 containerd[1726]: time="2026-04-17T23:40:21.051459732Z" level=info msg="StartContainer for \"8b645fd00bd38e85d796d72ea39258ea6ebc4489bceb74fef0231a9851adddc1\" returns successfully" Apr 17 23:40:21.123688 kubelet[3186]: I0417 23:40:21.120602 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-lchr8" podStartSLOduration=50.488414459 podStartE2EDuration="57.120582512s" podCreationTimestamp="2026-04-17 23:39:24 +0000 UTC" firstStartedPulling="2026-04-17 23:40:13.977639737 +0000 UTC m=+72.359475997" lastFinishedPulling="2026-04-17 23:40:20.60980779 +0000 UTC m=+78.991644050" observedRunningTime="2026-04-17 23:40:21.119386 +0000 UTC m=+79.501222260" watchObservedRunningTime="2026-04-17 23:40:21.120582512 +0000 UTC m=+79.502418872" Apr 17 23:40:23.085509 kubelet[3186]: I0417 23:40:23.084640 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-67d9f5f86b-vmsph" podStartSLOduration=52.468371693 podStartE2EDuration="59.084546518s" podCreationTimestamp="2026-04-17 23:39:24 +0000 UTC" firstStartedPulling="2026-04-17 23:40:14.302833805 +0000 UTC m=+72.684670065" lastFinishedPulling="2026-04-17 23:40:20.91900863 +0000 UTC m=+79.300844890" observedRunningTime="2026-04-17 23:40:21.146775369 +0000 UTC m=+79.528611629" watchObservedRunningTime="2026-04-17 23:40:23.084546518 +0000 UTC m=+81.466382778" Apr 17 23:40:24.743646 containerd[1726]: 
time="2026-04-17T23:40:24.743587927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:24.746465 containerd[1726]: time="2026-04-17T23:40:24.746402255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:40:24.749928 containerd[1726]: time="2026-04-17T23:40:24.749819388Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:24.756023 containerd[1726]: time="2026-04-17T23:40:24.755631645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:24.757677 containerd[1726]: time="2026-04-17T23:40:24.757526264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.838294932s" Apr 17 23:40:24.757677 containerd[1726]: time="2026-04-17T23:40:24.757569364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:40:24.760686 containerd[1726]: time="2026-04-17T23:40:24.759771486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:40:24.792424 containerd[1726]: time="2026-04-17T23:40:24.792378407Z" level=info msg="CreateContainer within sandbox 
\"edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:40:24.838726 containerd[1726]: time="2026-04-17T23:40:24.838542960Z" level=info msg="CreateContainer within sandbox \"edf4de991258cd1491d1411d4efd2919e5892f15100b7a36e8ddd6c56f035acb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"914cfb8a657380b2564a339f3876436e1d0244637f03145b950d7787ed483b33\"" Apr 17 23:40:24.840598 containerd[1726]: time="2026-04-17T23:40:24.840560080Z" level=info msg="StartContainer for \"914cfb8a657380b2564a339f3876436e1d0244637f03145b950d7787ed483b33\"" Apr 17 23:40:24.890896 systemd[1]: Started cri-containerd-914cfb8a657380b2564a339f3876436e1d0244637f03145b950d7787ed483b33.scope - libcontainer container 914cfb8a657380b2564a339f3876436e1d0244637f03145b950d7787ed483b33. Apr 17 23:40:24.951147 containerd[1726]: time="2026-04-17T23:40:24.951090567Z" level=info msg="StartContainer for \"914cfb8a657380b2564a339f3876436e1d0244637f03145b950d7787ed483b33\" returns successfully" Apr 17 23:40:25.132848 kubelet[3186]: I0417 23:40:25.132344 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-759d59f7d9-2dqb9" podStartSLOduration=49.755168196 podStartE2EDuration="1m0.132320948s" podCreationTimestamp="2026-04-17 23:39:25 +0000 UTC" firstStartedPulling="2026-04-17 23:40:14.382060029 +0000 UTC m=+72.763896289" lastFinishedPulling="2026-04-17 23:40:24.759212681 +0000 UTC m=+83.141049041" observedRunningTime="2026-04-17 23:40:25.130232128 +0000 UTC m=+83.512068488" watchObservedRunningTime="2026-04-17 23:40:25.132320948 +0000 UTC m=+83.514157308" Apr 17 23:40:26.316342 containerd[1726]: time="2026-04-17T23:40:26.316283987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:26.319267 containerd[1726]: 
time="2026-04-17T23:40:26.319098415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:40:26.322992 containerd[1726]: time="2026-04-17T23:40:26.322923552Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:26.328089 containerd[1726]: time="2026-04-17T23:40:26.328036903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:26.328931 containerd[1726]: time="2026-04-17T23:40:26.328716809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.568911823s" Apr 17 23:40:26.328931 containerd[1726]: time="2026-04-17T23:40:26.328756310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:40:26.330442 containerd[1726]: time="2026-04-17T23:40:26.330134023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:40:26.336943 containerd[1726]: time="2026-04-17T23:40:26.336915090Z" level=info msg="CreateContainer within sandbox \"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:40:26.373119 containerd[1726]: time="2026-04-17T23:40:26.373069545Z" level=info msg="CreateContainer within sandbox \"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dfe33c58e2633ac18c48eab11bc5352e366f16ad18c280545283b0636cdbe087\"" Apr 17 23:40:26.376459 containerd[1726]: time="2026-04-17T23:40:26.374971864Z" level=info msg="StartContainer for \"dfe33c58e2633ac18c48eab11bc5352e366f16ad18c280545283b0636cdbe087\"" Apr 17 23:40:26.416832 systemd[1]: Started cri-containerd-dfe33c58e2633ac18c48eab11bc5352e366f16ad18c280545283b0636cdbe087.scope - libcontainer container dfe33c58e2633ac18c48eab11bc5352e366f16ad18c280545283b0636cdbe087. Apr 17 23:40:26.456151 containerd[1726]: time="2026-04-17T23:40:26.456102862Z" level=info msg="StartContainer for \"dfe33c58e2633ac18c48eab11bc5352e366f16ad18c280545283b0636cdbe087\" returns successfully" Apr 17 23:40:27.727092 containerd[1726]: time="2026-04-17T23:40:27.727025955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:27.733943 containerd[1726]: time="2026-04-17T23:40:27.733877723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:40:27.738971 containerd[1726]: time="2026-04-17T23:40:27.738923972Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:27.744813 containerd[1726]: time="2026-04-17T23:40:27.744769030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:27.745697 containerd[1726]: time="2026-04-17T23:40:27.745517237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.415349613s" Apr 17 23:40:27.745697 containerd[1726]: time="2026-04-17T23:40:27.745560037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:40:27.747148 containerd[1726]: time="2026-04-17T23:40:27.746918551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:40:27.753164 containerd[1726]: time="2026-04-17T23:40:27.753122212Z" level=info msg="CreateContainer within sandbox \"abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:40:28.041961 containerd[1726]: time="2026-04-17T23:40:28.041818250Z" level=info msg="CreateContainer within sandbox \"abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0e08e2ae559502a8f400f5e32c66c3482abc36e2afb7aa4de9bd5e9ff6de4328\"" Apr 17 23:40:28.043434 containerd[1726]: time="2026-04-17T23:40:28.043130163Z" level=info msg="StartContainer for \"0e08e2ae559502a8f400f5e32c66c3482abc36e2afb7aa4de9bd5e9ff6de4328\"" Apr 17 23:40:28.127942 systemd[1]: Started cri-containerd-0e08e2ae559502a8f400f5e32c66c3482abc36e2afb7aa4de9bd5e9ff6de4328.scope - libcontainer container 0e08e2ae559502a8f400f5e32c66c3482abc36e2afb7aa4de9bd5e9ff6de4328. 
Apr 17 23:40:28.178564 containerd[1726]: time="2026-04-17T23:40:28.178417189Z" level=info msg="StartContainer for \"0e08e2ae559502a8f400f5e32c66c3482abc36e2afb7aa4de9bd5e9ff6de4328\" returns successfully" Apr 17 23:40:29.915044 containerd[1726]: time="2026-04-17T23:40:29.914990946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:29.939045 containerd[1726]: time="2026-04-17T23:40:29.938679270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:40:29.985412 containerd[1726]: time="2026-04-17T23:40:29.985001209Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:30.033959 containerd[1726]: time="2026-04-17T23:40:30.033905373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:30.034867 containerd[1726]: time="2026-04-17T23:40:30.034828582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.287869731s" Apr 17 23:40:30.035132 containerd[1726]: time="2026-04-17T23:40:30.034992083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:40:30.036984 containerd[1726]: 
time="2026-04-17T23:40:30.036887201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:40:30.098334 containerd[1726]: time="2026-04-17T23:40:30.098050781Z" level=info msg="CreateContainer within sandbox \"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:40:30.445099 containerd[1726]: time="2026-04-17T23:40:30.445040869Z" level=info msg="CreateContainer within sandbox \"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a5ce2fdf5c743862251f5dcb94aa791c8b02353e0fbae1baafb94b67c81490cf\"" Apr 17 23:40:30.447618 containerd[1726]: time="2026-04-17T23:40:30.445856077Z" level=info msg="StartContainer for \"a5ce2fdf5c743862251f5dcb94aa791c8b02353e0fbae1baafb94b67c81490cf\"" Apr 17 23:40:30.487793 systemd[1]: Started cri-containerd-a5ce2fdf5c743862251f5dcb94aa791c8b02353e0fbae1baafb94b67c81490cf.scope - libcontainer container a5ce2fdf5c743862251f5dcb94aa791c8b02353e0fbae1baafb94b67c81490cf. Apr 17 23:40:30.539349 containerd[1726]: time="2026-04-17T23:40:30.539289462Z" level=info msg="StartContainer for \"a5ce2fdf5c743862251f5dcb94aa791c8b02353e0fbae1baafb94b67c81490cf\" returns successfully" Apr 17 23:40:31.407889 kubelet[3186]: I0417 23:40:31.407849 3186 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:40:31.407889 kubelet[3186]: I0417 23:40:31.407896 3186 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:40:32.552273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334495804.mount: Deactivated successfully. 
Apr 17 23:40:32.885421 containerd[1726]: time="2026-04-17T23:40:32.885269494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:32.888456 containerd[1726]: time="2026-04-17T23:40:32.888248922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:40:32.935144 containerd[1726]: time="2026-04-17T23:40:32.935089666Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:32.982927 containerd[1726]: time="2026-04-17T23:40:32.982742417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:40:32.983817 containerd[1726]: time="2026-04-17T23:40:32.983776827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.946837825s" Apr 17 23:40:32.983918 containerd[1726]: time="2026-04-17T23:40:32.983823628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:40:33.052172 containerd[1726]: time="2026-04-17T23:40:33.052118975Z" level=info msg="CreateContainer within sandbox \"abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:40:33.350572 
containerd[1726]: time="2026-04-17T23:40:33.350516503Z" level=info msg="CreateContainer within sandbox \"abae41d1874a52bbe44e8bc5a5daf382d0e71a604a6e37c464d77450b777a811\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1154c3f179f43a8e96fb9815d72b665445a4c6a429130365de2a928f438fb493\"" Apr 17 23:40:33.352502 containerd[1726]: time="2026-04-17T23:40:33.351160909Z" level=info msg="StartContainer for \"1154c3f179f43a8e96fb9815d72b665445a4c6a429130365de2a928f438fb493\"" Apr 17 23:40:33.387835 systemd[1]: Started cri-containerd-1154c3f179f43a8e96fb9815d72b665445a4c6a429130365de2a928f438fb493.scope - libcontainer container 1154c3f179f43a8e96fb9815d72b665445a4c6a429130365de2a928f438fb493. Apr 17 23:40:33.433551 containerd[1726]: time="2026-04-17T23:40:33.433506889Z" level=info msg="StartContainer for \"1154c3f179f43a8e96fb9815d72b665445a4c6a429130365de2a928f438fb493\" returns successfully" Apr 17 23:40:34.171488 kubelet[3186]: I0417 23:40:34.171413 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9fs9x" podStartSLOduration=53.621476667 podStartE2EDuration="1m9.171390382s" podCreationTimestamp="2026-04-17 23:39:25 +0000 UTC" firstStartedPulling="2026-04-17 23:40:14.486188479 +0000 UTC m=+72.868024739" lastFinishedPulling="2026-04-17 23:40:30.036102094 +0000 UTC m=+88.417938454" observedRunningTime="2026-04-17 23:40:31.165694098 +0000 UTC m=+89.547530358" watchObservedRunningTime="2026-04-17 23:40:34.171390382 +0000 UTC m=+92.553226642" Apr 17 23:40:44.063000 systemd[1]: run-containerd-runc-k8s.io-a73085b174a779f2571185bf1cff650d25ceca4ed6fcf5fd131094b7c0b8a498-runc.Zz0R9h.mount: Deactivated successfully. 
Apr 17 23:40:44.149690 kubelet[3186]: I0417 23:40:44.147282 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fd85d588d-2xb7s" podStartSLOduration=11.994309153 podStartE2EDuration="30.147258156s" podCreationTimestamp="2026-04-17 23:40:14 +0000 UTC" firstStartedPulling="2026-04-17 23:40:14.831807534 +0000 UTC m=+73.213643894" lastFinishedPulling="2026-04-17 23:40:32.984756637 +0000 UTC m=+91.366592897" observedRunningTime="2026-04-17 23:40:34.17222759 +0000 UTC m=+92.554063950" watchObservedRunningTime="2026-04-17 23:40:44.147258156 +0000 UTC m=+102.529094416" Apr 17 23:40:45.316012 systemd[1]: Started sshd@7-10.0.0.22:22-20.229.252.112:35140.service - OpenSSH per-connection server daemon (20.229.252.112:35140). Apr 17 23:40:45.432491 sshd[5611]: Accepted publickey for core from 20.229.252.112 port 35140 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:40:45.434117 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:45.438977 systemd-logind[1697]: New session 10 of user core. Apr 17 23:40:45.445823 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:40:45.613624 sshd[5611]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:45.617790 systemd[1]: sshd@7-10.0.0.22:22-20.229.252.112:35140.service: Deactivated successfully. Apr 17 23:40:45.620277 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:40:45.621184 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:40:45.622254 systemd-logind[1697]: Removed session 10. Apr 17 23:40:50.644968 systemd[1]: Started sshd@8-10.0.0.22:22-20.229.252.112:35150.service - OpenSSH per-connection server daemon (20.229.252.112:35150). 
Apr 17 23:40:50.762865 sshd[5624]: Accepted publickey for core from 20.229.252.112 port 35150 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:40:50.764529 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:50.771539 systemd-logind[1697]: New session 11 of user core. Apr 17 23:40:50.779448 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:40:50.936727 sshd[5624]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:50.941081 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:40:50.941813 systemd[1]: sshd@8-10.0.0.22:22-20.229.252.112:35150.service: Deactivated successfully. Apr 17 23:40:50.944192 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:40:50.945194 systemd-logind[1697]: Removed session 11. Apr 17 23:40:52.130077 systemd[1]: run-containerd-runc-k8s.io-08ea172026f918ee754e4c211898ff01720f931a539df73db615b611ba497a63-runc.FSV2kY.mount: Deactivated successfully. Apr 17 23:40:55.964068 systemd[1]: Started sshd@9-10.0.0.22:22-20.229.252.112:52824.service - OpenSSH per-connection server daemon (20.229.252.112:52824). Apr 17 23:40:56.091066 sshd[5679]: Accepted publickey for core from 20.229.252.112 port 52824 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:40:56.092564 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:40:56.097352 systemd-logind[1697]: New session 12 of user core. Apr 17 23:40:56.102828 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:40:56.259461 sshd[5679]: pam_unix(sshd:session): session closed for user core Apr 17 23:40:56.262622 systemd[1]: sshd@9-10.0.0.22:22-20.229.252.112:52824.service: Deactivated successfully. Apr 17 23:40:56.264973 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:40:56.266580 systemd-logind[1697]: Session 12 logged out. 
Waiting for processes to exit. Apr 17 23:40:56.268130 systemd-logind[1697]: Removed session 12. Apr 17 23:41:01.286209 systemd[1]: Started sshd@10-10.0.0.22:22-20.229.252.112:52840.service - OpenSSH per-connection server daemon (20.229.252.112:52840). Apr 17 23:41:02.244155 containerd[1726]: time="2026-04-17T23:41:01.742228345Z" level=info msg="StopPodSandbox for \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\"" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.785 [WARNING][5723] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.785 [INFO][5723] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.785 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" iface="eth0" netns="" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.785 [INFO][5723] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.785 [INFO][5723] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.806 [INFO][5732] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.806 [INFO][5732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.806 [INFO][5732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.812 [WARNING][5732] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.812 [INFO][5732] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.813 [INFO][5732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.815 [INFO][5723] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: time="2026-04-17T23:41:01.816269883Z" level=info msg="TearDown network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\" successfully" Apr 17 23:41:02.244155 containerd[1726]: time="2026-04-17T23:41:01.816306184Z" level=info msg="StopPodSandbox for \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\" returns successfully" Apr 17 23:41:02.244155 containerd[1726]: time="2026-04-17T23:41:01.816902889Z" level=info msg="RemovePodSandbox for \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\"" Apr 17 23:41:02.244155 containerd[1726]: time="2026-04-17T23:41:01.816928289Z" level=info msg="Forcibly stopping sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\"" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.852 [WARNING][5747] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" WorkloadEndpoint="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.852 [INFO][5747] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.852 [INFO][5747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" iface="eth0" netns="" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.852 [INFO][5747] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.852 [INFO][5747] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.872 [INFO][5754] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.873 [INFO][5754] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.873 [INFO][5754] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.880 [WARNING][5754] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.880 [INFO][5754] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" HandleID="k8s-pod-network.3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Workload="ci--4081.3.6--n--b8c45c9493-k8s-whisker--596447fcf--5kzpg-eth0" Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.882 [INFO][5754] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:02.244155 containerd[1726]: 2026-04-17 23:41:01.883 [INFO][5747] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2" Apr 17 23:41:02.244155 containerd[1726]: time="2026-04-17T23:41:01.885982484Z" level=info msg="TearDown network for sandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\" successfully" Apr 17 23:41:02.242459 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:02.246439 sshd[5713]: Accepted publickey for core from 20.229.252.112 port 52840 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:02.249569 systemd-logind[1697]: New session 13 of user core. Apr 17 23:41:02.254811 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:41:02.262595 containerd[1726]: time="2026-04-17T23:41:02.262555530Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:02.262706 containerd[1726]: time="2026-04-17T23:41:02.262675131Z" level=info msg="RemovePodSandbox \"3c12d957f9d0954e270e4e16862ee8640686ff9787362a86e5736c0065033de2\" returns successfully" Apr 17 23:41:02.263241 containerd[1726]: time="2026-04-17T23:41:02.263211536Z" level=info msg="StopPodSandbox for \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\"" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.298 [WARNING][5769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a52a9606-2487-4d0a-8d3d-112a3887d0ee", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147", Pod:"csi-node-driver-9fs9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d93488e9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.298 [INFO][5769] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.298 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" iface="eth0" netns="" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.298 [INFO][5769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.298 [INFO][5769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.324 [INFO][5777] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.324 [INFO][5777] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.324 [INFO][5777] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.329 [WARNING][5777] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.329 [INFO][5777] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.331 [INFO][5777] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:02.336677 containerd[1726]: 2026-04-17 23:41:02.333 [INFO][5769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.336677 containerd[1726]: time="2026-04-17T23:41:02.336515468Z" level=info msg="TearDown network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\" successfully" Apr 17 23:41:02.336677 containerd[1726]: time="2026-04-17T23:41:02.336552168Z" level=info msg="StopPodSandbox for \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\" returns successfully" Apr 17 23:41:02.338908 containerd[1726]: time="2026-04-17T23:41:02.338196083Z" level=info msg="RemovePodSandbox for \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\"" Apr 17 23:41:02.338908 containerd[1726]: time="2026-04-17T23:41:02.338290383Z" level=info msg="Forcibly stopping sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\"" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.411 [WARNING][5798] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a52a9606-2487-4d0a-8d3d-112a3887d0ee", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 39, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b8c45c9493", ContainerID:"610faa707a788dddb978841173b4064aeba595938f8177bdf3f04e16fc31c147", Pod:"csi-node-driver-9fs9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d93488e9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.412 [INFO][5798] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.412 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" iface="eth0" netns="" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.412 [INFO][5798] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.412 [INFO][5798] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.444 [INFO][5807] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.446 [INFO][5807] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.446 [INFO][5807] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.452 [WARNING][5807] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.452 [INFO][5807] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" HandleID="k8s-pod-network.0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Workload="ci--4081.3.6--n--b8c45c9493-k8s-csi--node--driver--9fs9x-eth0" Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.453 [INFO][5807] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:41:02.456486 containerd[1726]: 2026-04-17 23:41:02.454 [INFO][5798] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732" Apr 17 23:41:02.456486 containerd[1726]: time="2026-04-17T23:41:02.456392901Z" level=info msg="TearDown network for sandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\" successfully" Apr 17 23:41:02.462532 sshd[5713]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:02.466243 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:41:02.467931 systemd[1]: sshd@10-10.0.0.22:22-20.229.252.112:52840.service: Deactivated successfully. Apr 17 23:41:02.471988 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:41:02.474369 systemd-logind[1697]: Removed session 13. Apr 17 23:41:02.478039 containerd[1726]: time="2026-04-17T23:41:02.477995688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:41:02.478147 containerd[1726]: time="2026-04-17T23:41:02.478092488Z" level=info msg="RemovePodSandbox \"0a0467ff709330c0ac8eaf0fa7bbd1f66be6f6277bf7678ee189f82e13db1732\" returns successfully" Apr 17 23:41:07.493315 systemd[1]: Started sshd@11-10.0.0.22:22-20.229.252.112:37636.service - OpenSSH per-connection server daemon (20.229.252.112:37636). Apr 17 23:41:07.612053 sshd[5820]: Accepted publickey for core from 20.229.252.112 port 37636 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:07.613582 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:07.617709 systemd-logind[1697]: New session 14 of user core. Apr 17 23:41:07.621805 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:41:07.783725 sshd[5820]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:07.787102 systemd[1]: sshd@11-10.0.0.22:22-20.229.252.112:37636.service: Deactivated successfully. Apr 17 23:41:07.789430 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:41:07.791015 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:41:07.792673 systemd-logind[1697]: Removed session 14. Apr 17 23:41:12.815979 systemd[1]: Started sshd@12-10.0.0.22:22-20.229.252.112:37652.service - OpenSSH per-connection server daemon (20.229.252.112:37652). Apr 17 23:41:12.933220 sshd[5872]: Accepted publickey for core from 20.229.252.112 port 37652 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:12.934815 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:12.939967 systemd-logind[1697]: New session 15 of user core. Apr 17 23:41:12.944137 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 17 23:41:13.100113 sshd[5872]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:13.104477 systemd[1]: sshd@12-10.0.0.22:22-20.229.252.112:37652.service: Deactivated successfully. Apr 17 23:41:13.106992 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:41:13.107873 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:41:13.108918 systemd-logind[1697]: Removed session 15. Apr 17 23:41:13.124771 systemd[1]: Started sshd@13-10.0.0.22:22-20.229.252.112:37668.service - OpenSSH per-connection server daemon (20.229.252.112:37668). Apr 17 23:41:13.248798 sshd[5885]: Accepted publickey for core from 20.229.252.112 port 37668 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:13.250323 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:13.255090 systemd-logind[1697]: New session 16 of user core. Apr 17 23:41:13.257844 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:41:13.450760 sshd[5885]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:13.456250 systemd[1]: sshd@13-10.0.0.22:22-20.229.252.112:37668.service: Deactivated successfully. Apr 17 23:41:13.458457 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:41:13.459311 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:41:13.460327 systemd-logind[1697]: Removed session 16. Apr 17 23:41:13.480917 systemd[1]: Started sshd@14-10.0.0.22:22-20.229.252.112:37680.service - OpenSSH per-connection server daemon (20.229.252.112:37680). Apr 17 23:41:13.619308 sshd[5896]: Accepted publickey for core from 20.229.252.112 port 37680 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:13.620834 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:13.625523 systemd-logind[1697]: New session 17 of user core. 
Apr 17 23:41:13.628823 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:41:13.801953 sshd[5896]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:13.805909 systemd[1]: sshd@14-10.0.0.22:22-20.229.252.112:37680.service: Deactivated successfully. Apr 17 23:41:13.808267 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:41:13.809606 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit. Apr 17 23:41:13.810589 systemd-logind[1697]: Removed session 17. Apr 17 23:41:18.835969 systemd[1]: Started sshd@15-10.0.0.22:22-20.229.252.112:39380.service - OpenSSH per-connection server daemon (20.229.252.112:39380). Apr 17 23:41:18.958917 sshd[5946]: Accepted publickey for core from 20.229.252.112 port 39380 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:18.959523 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:18.964246 systemd-logind[1697]: New session 18 of user core. Apr 17 23:41:18.971850 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:41:19.134203 sshd[5946]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:19.138374 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:41:19.139019 systemd[1]: sshd@15-10.0.0.22:22-20.229.252.112:39380.service: Deactivated successfully. Apr 17 23:41:19.141374 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:41:19.142482 systemd-logind[1697]: Removed session 18. Apr 17 23:41:24.168974 systemd[1]: Started sshd@16-10.0.0.22:22-20.229.252.112:39392.service - OpenSSH per-connection server daemon (20.229.252.112:39392). 
Apr 17 23:41:24.284690 sshd[5978]: Accepted publickey for core from 20.229.252.112 port 39392 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:24.285595 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:24.289643 systemd-logind[1697]: New session 19 of user core. Apr 17 23:41:24.294824 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:41:24.459208 sshd[5978]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:24.463230 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:41:24.463935 systemd[1]: sshd@16-10.0.0.22:22-20.229.252.112:39392.service: Deactivated successfully. Apr 17 23:41:24.466136 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:41:24.467233 systemd-logind[1697]: Removed session 19. Apr 17 23:41:24.483912 systemd[1]: Started sshd@17-10.0.0.22:22-20.229.252.112:39396.service - OpenSSH per-connection server daemon (20.229.252.112:39396). Apr 17 23:41:24.609942 sshd[5991]: Accepted publickey for core from 20.229.252.112 port 39396 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:24.611578 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:24.616290 systemd-logind[1697]: New session 20 of user core. Apr 17 23:41:24.624862 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:41:24.839221 sshd[5991]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:24.842242 systemd[1]: sshd@17-10.0.0.22:22-20.229.252.112:39396.service: Deactivated successfully. Apr 17 23:41:24.844534 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:41:24.846528 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:41:24.847820 systemd-logind[1697]: Removed session 20. 
Apr 17 23:41:24.874248 systemd[1]: Started sshd@18-10.0.0.22:22-20.229.252.112:51646.service - OpenSSH per-connection server daemon (20.229.252.112:51646). Apr 17 23:41:24.992018 sshd[6002]: Accepted publickey for core from 20.229.252.112 port 51646 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:24.993447 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:24.999099 systemd-logind[1697]: New session 21 of user core. Apr 17 23:41:25.006813 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:41:25.740100 sshd[6002]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:25.745919 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:41:25.750136 systemd[1]: sshd@18-10.0.0.22:22-20.229.252.112:51646.service: Deactivated successfully. Apr 17 23:41:25.755697 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:41:25.773383 systemd-logind[1697]: Removed session 21. Apr 17 23:41:25.783772 systemd[1]: Started sshd@19-10.0.0.22:22-20.229.252.112:51658.service - OpenSSH per-connection server daemon (20.229.252.112:51658). Apr 17 23:41:25.908135 sshd[6045]: Accepted publickey for core from 20.229.252.112 port 51658 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:25.909607 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:25.913648 systemd-logind[1697]: New session 22 of user core. Apr 17 23:41:25.917848 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 17 23:41:26.308137 sshd[6045]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:26.312860 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:41:26.314521 systemd[1]: sshd@19-10.0.0.22:22-20.229.252.112:51658.service: Deactivated successfully. Apr 17 23:41:26.317405 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 17 23:41:26.321493 systemd-logind[1697]: Removed session 22. Apr 17 23:41:26.331708 systemd[1]: Started sshd@20-10.0.0.22:22-20.229.252.112:51666.service - OpenSSH per-connection server daemon (20.229.252.112:51666). Apr 17 23:41:26.461063 sshd[6057]: Accepted publickey for core from 20.229.252.112 port 51666 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:26.462542 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:26.467491 systemd-logind[1697]: New session 23 of user core. Apr 17 23:41:26.473847 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 17 23:41:26.631229 sshd[6057]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:26.634974 systemd[1]: sshd@20-10.0.0.22:22-20.229.252.112:51666.service: Deactivated successfully. Apr 17 23:41:26.637893 systemd[1]: session-23.scope: Deactivated successfully. Apr 17 23:41:26.638845 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit. Apr 17 23:41:26.640040 systemd-logind[1697]: Removed session 23. Apr 17 23:41:31.662986 systemd[1]: Started sshd@21-10.0.0.22:22-20.229.252.112:51682.service - OpenSSH per-connection server daemon (20.229.252.112:51682). Apr 17 23:41:31.780808 sshd[6070]: Accepted publickey for core from 20.229.252.112 port 51682 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:41:31.783780 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:41:31.788537 systemd-logind[1697]: New session 24 of user core. Apr 17 23:41:31.792813 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 17 23:41:31.956173 sshd[6070]: pam_unix(sshd:session): session closed for user core Apr 17 23:41:31.959166 systemd[1]: sshd@21-10.0.0.22:22-20.229.252.112:51682.service: Deactivated successfully. Apr 17 23:41:31.961984 systemd[1]: session-24.scope: Deactivated successfully. 
Apr 17 23:41:31.963592 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit.
Apr 17 23:41:31.965319 systemd-logind[1697]: Removed session 24.
Apr 17 23:41:36.987057 systemd[1]: Started sshd@22-10.0.0.22:22-20.229.252.112:42408.service - OpenSSH per-connection server daemon (20.229.252.112:42408).
Apr 17 23:41:37.102723 sshd[6088]: Accepted publickey for core from 20.229.252.112 port 42408 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:41:37.104367 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:41:37.109133 systemd-logind[1697]: New session 25 of user core.
Apr 17 23:41:37.112829 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 17 23:41:37.268975 sshd[6088]: pam_unix(sshd:session): session closed for user core
Apr 17 23:41:37.272927 systemd[1]: sshd@22-10.0.0.22:22-20.229.252.112:42408.service: Deactivated successfully.
Apr 17 23:41:37.275260 systemd[1]: session-25.scope: Deactivated successfully.
Apr 17 23:41:37.276157 systemd-logind[1697]: Session 25 logged out. Waiting for processes to exit.
Apr 17 23:41:37.277223 systemd-logind[1697]: Removed session 25.
Apr 17 23:41:42.301241 systemd[1]: Started sshd@23-10.0.0.22:22-20.229.252.112:42412.service - OpenSSH per-connection server daemon (20.229.252.112:42412).
Apr 17 23:41:42.425433 sshd[6113]: Accepted publickey for core from 20.229.252.112 port 42412 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:41:42.427070 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:41:42.431830 systemd-logind[1697]: New session 26 of user core.
Apr 17 23:41:42.437832 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 17 23:41:42.592365 sshd[6113]: pam_unix(sshd:session): session closed for user core
Apr 17 23:41:42.596400 systemd[1]: sshd@23-10.0.0.22:22-20.229.252.112:42412.service: Deactivated successfully.
Apr 17 23:41:42.598859 systemd[1]: session-26.scope: Deactivated successfully.
Apr 17 23:41:42.599893 systemd-logind[1697]: Session 26 logged out. Waiting for processes to exit.
Apr 17 23:41:42.600848 systemd-logind[1697]: Removed session 26.
Apr 17 23:41:47.626969 systemd[1]: Started sshd@24-10.0.0.22:22-20.229.252.112:56686.service - OpenSSH per-connection server daemon (20.229.252.112:56686).
Apr 17 23:41:47.752333 sshd[6148]: Accepted publickey for core from 20.229.252.112 port 56686 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:41:47.756380 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:41:47.769609 systemd-logind[1697]: New session 27 of user core.
Apr 17 23:41:47.775957 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 17 23:41:47.979984 sshd[6148]: pam_unix(sshd:session): session closed for user core
Apr 17 23:41:47.983942 systemd-logind[1697]: Session 27 logged out. Waiting for processes to exit.
Apr 17 23:41:47.985380 systemd[1]: sshd@24-10.0.0.22:22-20.229.252.112:56686.service: Deactivated successfully.
Apr 17 23:41:47.989642 systemd[1]: session-27.scope: Deactivated successfully.
Apr 17 23:41:47.992762 systemd-logind[1697]: Removed session 27.
Apr 17 23:41:53.009970 systemd[1]: Started sshd@25-10.0.0.22:22-20.229.252.112:56688.service - OpenSSH per-connection server daemon (20.229.252.112:56688).
Apr 17 23:41:53.126290 sshd[6198]: Accepted publickey for core from 20.229.252.112 port 56688 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:41:53.127941 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:41:53.133444 systemd-logind[1697]: New session 28 of user core.
Apr 17 23:41:53.139811 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 17 23:41:53.294927 sshd[6198]: pam_unix(sshd:session): session closed for user core
Apr 17 23:41:53.298150 systemd[1]: sshd@25-10.0.0.22:22-20.229.252.112:56688.service: Deactivated successfully.
Apr 17 23:41:53.300523 systemd[1]: session-28.scope: Deactivated successfully.
Apr 17 23:41:53.302117 systemd-logind[1697]: Session 28 logged out. Waiting for processes to exit.
Apr 17 23:41:53.304042 systemd-logind[1697]: Removed session 28.
Apr 17 23:41:58.324955 systemd[1]: Started sshd@26-10.0.0.22:22-20.229.252.112:50088.service - OpenSSH per-connection server daemon (20.229.252.112:50088).
Apr 17 23:41:58.443539 sshd[6250]: Accepted publickey for core from 20.229.252.112 port 50088 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:41:58.445072 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:41:58.449816 systemd-logind[1697]: New session 29 of user core.
Apr 17 23:41:58.453168 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 17 23:41:58.613475 sshd[6250]: pam_unix(sshd:session): session closed for user core
Apr 17 23:41:58.617775 systemd-logind[1697]: Session 29 logged out. Waiting for processes to exit.
Apr 17 23:41:58.618772 systemd[1]: sshd@26-10.0.0.22:22-20.229.252.112:50088.service: Deactivated successfully.
Apr 17 23:41:58.621111 systemd[1]: session-29.scope: Deactivated successfully.
Apr 17 23:41:58.622207 systemd-logind[1697]: Removed session 29.