Apr 17 23:44:09.109467 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:44:09.109501 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:44:09.109520 kernel: BIOS-provided physical RAM map:
Apr 17 23:44:09.109531 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:44:09.109542 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 17 23:44:09.109553 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Apr 17 23:44:09.109567 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Apr 17 23:44:09.109579 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Apr 17 23:44:09.109594 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Apr 17 23:44:09.109606 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Apr 17 23:44:09.109619 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 17 23:44:09.109631 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 17 23:44:09.109642 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 17 23:44:09.109655 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 17 23:44:09.109673 kernel: printk: bootconsole [earlyser0] enabled
Apr 17 23:44:09.109687 kernel: NX (Execute Disable) protection: active
Apr 17 23:44:09.109699 kernel: APIC: Static calls initialized
Apr 17 23:44:09.109713 kernel: efi: EFI v2.7 by Microsoft
Apr 17 23:44:09.109726 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f41e418
Apr 17 23:44:09.109740 kernel: SMBIOS 3.1.0 present.
Apr 17 23:44:09.109753 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 17 23:44:09.109766 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 17 23:44:09.109779 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 17 23:44:09.109793 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0
Apr 17 23:44:09.109806 kernel: Hyper-V: Nested features: 0x1e0101
Apr 17 23:44:09.109821 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 17 23:44:09.109833 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 17 23:44:09.109845 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 17 23:44:09.109859 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 17 23:44:09.112914 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 17 23:44:09.112929 kernel: tsc: Detected 2593.906 MHz processor
Apr 17 23:44:09.112943 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:44:09.112956 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:44:09.112969 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 17 23:44:09.112986 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:44:09.112999 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:44:09.113012 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 17 23:44:09.113024 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 17 23:44:09.113037 kernel: Using GB pages for direct mapping
Apr 17 23:44:09.113049 kernel: Secure boot disabled
Apr 17 23:44:09.113068 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:44:09.113084 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 17 23:44:09.113098 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113111 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113125 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 17 23:44:09.113138 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 17 23:44:09.113151 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113165 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113181 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113194 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113207 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113221 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 17 23:44:09.113234 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 17 23:44:09.113247 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Apr 17 23:44:09.113261 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 17 23:44:09.113274 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 17 23:44:09.113287 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 17 23:44:09.113303 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 17 23:44:09.113316 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 17 23:44:09.113329 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Apr 17 23:44:09.113343 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 17 23:44:09.113356 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:44:09.113369 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:44:09.113383 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 17 23:44:09.113396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 17 23:44:09.113409 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 17 23:44:09.113425 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 17 23:44:09.113439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 17 23:44:09.113452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 17 23:44:09.113465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 17 23:44:09.113479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 17 23:44:09.113492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 17 23:44:09.113505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 17 23:44:09.113519 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 17 23:44:09.113535 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 17 23:44:09.113549 kernel: Zone ranges:
Apr 17 23:44:09.113563 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:44:09.113576 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 17 23:44:09.113590 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 17 23:44:09.113603 kernel: Movable zone start for each node
Apr 17 23:44:09.113616 kernel: Early memory node ranges
Apr 17 23:44:09.113629 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:44:09.113643 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Apr 17 23:44:09.113659 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Apr 17 23:44:09.113672 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 17 23:44:09.113685 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 17 23:44:09.113699 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 17 23:44:09.113712 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:44:09.113725 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:44:09.113738 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 17 23:44:09.113751 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Apr 17 23:44:09.113765 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 17 23:44:09.113781 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 17 23:44:09.113794 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:44:09.113807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:44:09.113821 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:44:09.113834 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 17 23:44:09.113847 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:44:09.113868 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 17 23:44:09.114943 kernel: Booting paravirtualized kernel on Hyper-V
Apr 17 23:44:09.114958 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:44:09.114976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:44:09.114987 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:44:09.114996 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:44:09.115004 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:44:09.115016 kernel: Hyper-V: PV spinlocks enabled
Apr 17 23:44:09.115023 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:44:09.115037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:44:09.115045 kernel: random: crng init done
Apr 17 23:44:09.115059 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 17 23:44:09.115067 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:44:09.115077 kernel: Fallback order for Node 0: 0
Apr 17 23:44:09.115087 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Apr 17 23:44:09.115094 kernel: Policy zone: Normal
Apr 17 23:44:09.115106 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:44:09.115114 kernel: software IO TLB: area num 2.
Apr 17 23:44:09.115125 kernel: Memory: 8066036K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316932K reserved, 0K cma-reserved)
Apr 17 23:44:09.115135 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:44:09.115156 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:44:09.115165 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:44:09.115177 kernel: Dynamic Preempt: voluntary
Apr 17 23:44:09.115188 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:44:09.115201 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:44:09.115209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:44:09.115222 kernel: Trampoline variant of Tasks RCU enabled.
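The `BIOS-e820` entries near the top of the log describe the VM's physical address layout. As a sanity check, the `usable` ranges can be summed to estimate installed RAM; a minimal sketch (range values copied from the log above, end addresses inclusive):

```python
# Sum the "usable" BIOS-e820 ranges reported in the boot log above.
# e820 end addresses are inclusive, so each range spans end - start + 1 bytes.
usable_ranges = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000000437dfff),
    (0x000000000477e000, 0x000000003ff1efff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]

total_bytes = sum(end - start + 1 for start, end in usable_ranges)
print(f"usable: {total_bytes} bytes (~{total_bytes / 2**30:.2f} GiB)")
```

This comes out just under 8 GiB, in line with the kernel's `Memory: 8066036K/8383228K available` line above; the remainder is firmware- and kernel-reserved memory.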
Apr 17 23:44:09.115230 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:44:09.115242 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:44:09.115253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:44:09.115266 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:44:09.115274 kernel: Using NULL legacy PIC
Apr 17 23:44:09.115287 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 17 23:44:09.115295 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:44:09.115306 kernel: Console: colour dummy device 80x25
Apr 17 23:44:09.115315 kernel: printk: console [tty1] enabled
Apr 17 23:44:09.115325 kernel: printk: console [ttyS0] enabled
Apr 17 23:44:09.115338 kernel: printk: bootconsole [earlyser0] disabled
Apr 17 23:44:09.115347 kernel: ACPI: Core revision 20230628
Apr 17 23:44:09.115359 kernel: Failed to register legacy timer interrupt
Apr 17 23:44:09.115371 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:44:09.115380 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 17 23:44:09.115390 kernel: Hyper-V: Using IPI hypercalls
Apr 17 23:44:09.115401 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 17 23:44:09.115409 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 17 23:44:09.115417 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 17 23:44:09.115432 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 17 23:44:09.115441 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 17 23:44:09.115453 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 17 23:44:09.115461 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 17 23:44:09.115474 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:44:09.115482 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:44:09.115494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:44:09.115503 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:44:09.115513 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:44:09.115523 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:44:09.115536 kernel: RETBleed: Vulnerable
Apr 17 23:44:09.115546 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:44:09.115554 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:44:09.115566 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:44:09.115575 kernel: active return thunk: its_return_thunk
Apr 17 23:44:09.115585 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:44:09.115595 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:44:09.115603 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:44:09.115615 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:44:09.115623 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:44:09.115638 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:44:09.115646 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:44:09.115658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:44:09.115666 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:44:09.115677 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:44:09.115686 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:44:09.115694 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:44:09.115706 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:44:09.115714 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:44:09.115727 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:44:09.115735 kernel: landlock: Up and running.
Apr 17 23:44:09.115746 kernel: SELinux: Initializing.
Apr 17 23:44:09.115759 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:44:09.115769 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:44:09.115778 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 17 23:44:09.115790 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:44:09.115798 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:44:09.115811 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:44:09.115819 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:44:09.115831 kernel: signal: max sigframe size: 3632
Apr 17 23:44:09.115840 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:44:09.115855 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:44:09.115886 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:44:09.115898 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:44:09.115907 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:44:09.115917 kernel: .... node #0, CPUs: #1
Apr 17 23:44:09.115928 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 17 23:44:09.115937 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:44:09.115949 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:44:09.115957 kernel: smpboot: Max logical packages: 1
Apr 17 23:44:09.115973 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 17 23:44:09.115981 kernel: devtmpfs: initialized
Apr 17 23:44:09.115993 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:44:09.116002 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 17 23:44:09.116014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:44:09.116023 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:44:09.116034 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:44:09.116043 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:44:09.116052 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:44:09.116066 kernel: audit: type=2000 audit(1776469447.029:1): state=initialized audit_enabled=0 res=1
Apr 17 23:44:09.116074 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:44:09.116087 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:44:09.116095 kernel: cpuidle: using governor menu
Apr 17 23:44:09.116107 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:44:09.116115 kernel: dca service started, version 1.12.1
Apr 17 23:44:09.116126 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Apr 17 23:44:09.116136 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Apr 17 23:44:09.116148 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
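The calibration line above ("Calibrating delay loop (skipped) ... 5187.81 BogoMIPS (lpj=2593906)") and the SMP total relate BogoMIPS to the loops-per-jiffy value: when the delay loop is skipped, the kernel derives BogoMIPS as `lpj * HZ / 500000`. A quick check against the logged numbers; `HZ = 1000` is an assumption here (not stated in the log, but consistent with lpj matching the 2593.906 MHz TSC):

```python
# Reproduce the BogoMIPS figures from the calibration lines above.
# With the delay loop skipped, BogoMIPS = lpj * HZ / 500000.
lpj = 2593906   # loops per jiffy, from the log
HZ = 1000       # assumed tick rate

bogomips = lpj * HZ / 500000
print(f"per-CPU: {bogomips:.2f}")      # the logged "5187.81 BogoMIPS"
print(f"2 CPUs:  {2 * bogomips:.2f}")  # the logged "(10375.62 BogoMIPS)" total
```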
Apr 17 23:44:09.116159 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:44:09.116172 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:44:09.116180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:44:09.116193 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:44:09.116201 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:44:09.116212 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:44:09.116222 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:44:09.116230 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:44:09.116245 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:44:09.116253 kernel: ACPI: Interpreter enabled
Apr 17 23:44:09.116265 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:44:09.116273 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:44:09.116285 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:44:09.116294 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 17 23:44:09.116304 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 17 23:44:09.116314 kernel: iommu: Default domain type: Translated
Apr 17 23:44:09.116322 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:44:09.116335 kernel: efivars: Registered efivars operations
Apr 17 23:44:09.116346 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:44:09.116358 kernel: PCI: System does not support PCI
Apr 17 23:44:09.116365 kernel: vgaarb: loaded
Apr 17 23:44:09.116378 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 17 23:44:09.116386 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:44:09.116397 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:44:09.116406 kernel: pnp: PnP ACPI init
Apr 17 23:44:09.116416 kernel: pnp: PnP ACPI: found 3 devices
Apr 17 23:44:09.116427 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:44:09.116438 kernel: NET: Registered PF_INET protocol family
Apr 17 23:44:09.116450 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:44:09.116459 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 17 23:44:09.116471 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:44:09.116480 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:44:09.116490 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 17 23:44:09.116500 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 17 23:44:09.116509 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:44:09.116521 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:44:09.116536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:44:09.116544 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:44:09.116557 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:44:09.116565 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:44:09.116577 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 17 23:44:09.116586 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:44:09.116596 kernel: Initialise system trusted keyrings
Apr 17 23:44:09.116606 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 17 23:44:09.116619 kernel: Key type asymmetric registered
Apr 17 23:44:09.116629 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:44:09.116637 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:44:09.116648 kernel: io scheduler mq-deadline registered
Apr 17 23:44:09.116658 kernel: io scheduler kyber registered
Apr 17 23:44:09.116666 kernel: io scheduler bfq registered
Apr 17 23:44:09.116678 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:44:09.116686 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:44:09.116699 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:44:09.116707 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 17 23:44:09.116722 kernel: i8042: PNP: No PS/2 controller found.
Apr 17 23:44:09.116891 kernel: rtc_cmos 00:02: registered as rtc0
Apr 17 23:44:09.117006 kernel: rtc_cmos 00:02: setting system clock to 2026-04-17T23:44:08 UTC (1776469448)
Apr 17 23:44:09.117112 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 17 23:44:09.117129 kernel: intel_pstate: CPU model not supported
Apr 17 23:44:09.117150 kernel: efifb: probing for efifb
Apr 17 23:44:09.117170 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 17 23:44:09.117197 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 17 23:44:09.117214 kernel: efifb: scrolling: redraw
Apr 17 23:44:09.117230 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:44:09.117249 kernel: Console: switching to colour frame buffer device 128x48
Apr 17 23:44:09.117265 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:44:09.117280 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:44:09.117296 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:44:09.117312 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:44:09.117330 kernel: Segment Routing with IPv6
Apr 17 23:44:09.117355 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:44:09.117372 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:44:09.117390 kernel: Key type dns_resolver registered
Apr 17 23:44:09.117409 kernel: IPI shorthand broadcast: enabled
Apr 17 23:44:09.117427 kernel: sched_clock: Marking stable (870003200, 47578100)->(1134062400, -216481100)
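The `rtc_cmos` line above reads the hardware clock and logs the same instant twice: as an ISO timestamp and as Unix epoch seconds. The two forms convert into each other directly; for example:

```python
from datetime import datetime, timezone

# Convert the epoch value logged by rtc_cmos above back to the UTC
# timestamp printed alongside it: 2026-04-17T23:44:08 UTC (1776469448).
epoch = 1776469448
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(ts.strftime("%Y-%m-%dT%H:%M:%S UTC"))  # 2026-04-17T23:44:08 UTC
```

The same epoch scale explains the nearby `audit(1776469447.029:1)` record, stamped about a second earlier.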
Apr 17 23:44:09.117443 kernel: registered taskstats version 1
Apr 17 23:44:09.117459 kernel: Loading compiled-in X.509 certificates
Apr 17 23:44:09.117475 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:44:09.117492 kernel: Key type .fscrypt registered
Apr 17 23:44:09.117512 kernel: Key type fscrypt-provisioning registered
Apr 17 23:44:09.117527 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:44:09.117543 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:44:09.117559 kernel: ima: No architecture policies found
Apr 17 23:44:09.117575 kernel: clk: Disabling unused clocks
Apr 17 23:44:09.117595 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:44:09.117613 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:44:09.117630 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:44:09.117646 kernel: Run /init as init process
Apr 17 23:44:09.117670 kernel: with arguments:
Apr 17 23:44:09.117685 kernel: /init
Apr 17 23:44:09.117700 kernel: with environment:
Apr 17 23:44:09.117718 kernel: HOME=/
Apr 17 23:44:09.117733 kernel: TERM=linux
Apr 17 23:44:09.117755 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:44:09.117776 systemd[1]: Detected virtualization microsoft.
Apr 17 23:44:09.117794 systemd[1]: Detected architecture x86-64.
Apr 17 23:44:09.117818 systemd[1]: Running in initrd.
Apr 17 23:44:09.117834 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:44:09.117850 systemd[1]: Hostname set to .
Apr 17 23:44:09.120650 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:44:09.120669 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:44:09.120685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:44:09.120706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:44:09.120722 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:44:09.120742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:44:09.120758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:44:09.120773 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:44:09.120791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:44:09.120806 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:44:09.120821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:44:09.120837 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:44:09.120854 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:44:09.120928 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:44:09.120944 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:44:09.120959 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:44:09.120974 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:44:09.120989 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:44:09.121004 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:44:09.121020 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:44:09.121034 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:44:09.121053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:44:09.121068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:44:09.121083 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:44:09.121098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:44:09.121113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:44:09.121128 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:44:09.121144 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:44:09.121159 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:44:09.121177 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:44:09.121191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:09.121232 systemd-journald[177]: Collecting audit messages is disabled.
Apr 17 23:44:09.121265 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:44:09.121284 systemd-journald[177]: Journal started
Apr 17 23:44:09.121314 systemd-journald[177]: Runtime Journal (/run/log/journal/8474bcf99115403db21f2831728b1e8d) is 8.0M, max 158.7M, 150.7M free.
Apr 17 23:44:09.129890 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:44:09.128733 systemd-modules-load[178]: Inserted module 'overlay'
Apr 17 23:44:09.132645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:44:09.136297 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:44:09.154815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:09.169070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:44:09.177978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:44:09.190310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:44:09.193027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:44:09.208338 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:44:09.211120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:44:09.228529 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 17 23:44:09.228880 kernel: Bridge firewalling registered
Apr 17 23:44:09.233318 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:44:09.240794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:44:09.245134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:09.256036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:44:09.267011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:44:09.279064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:44:09.282133 dracut-cmdline[209]: dracut-dracut-053
Apr 17 23:44:09.287177 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:44:09.311072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:44:09.323019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:44:09.371514 systemd-resolved[259]: Positive Trust Anchors:
Apr 17 23:44:09.374595 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:44:09.380303 kernel: SCSI subsystem initialized
Apr 17 23:44:09.380406 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:44:09.402082 systemd-resolved[259]: Defaulting to hostname 'linux'.
Apr 17 23:44:09.406000 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:44:09.418978 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:44:09.415377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:44:09.429880 kernel: iscsi: registered transport (tcp)
Apr 17 23:44:09.451635 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:44:09.451701 kernel: QLogic iSCSI HBA Driver
Apr 17 23:44:09.489165 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:44:09.502121 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:44:09.530288 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:44:09.530379 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:44:09.533847 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:44:09.573885 kernel: raid6: avx512x4 gen() 18367 MB/s
Apr 17 23:44:09.593886 kernel: raid6: avx512x2 gen() 18194 MB/s
Apr 17 23:44:09.612872 kernel: raid6: avx512x1 gen() 18045 MB/s
Apr 17 23:44:09.631872 kernel: raid6: avx2x4 gen() 18133 MB/s
Apr 17 23:44:09.651876 kernel: raid6: avx2x2 gen() 18188 MB/s
Apr 17 23:44:09.672248 kernel: raid6: avx2x1 gen() 13955 MB/s
Apr 17 23:44:09.672279 kernel: raid6: using algorithm avx512x4 gen() 18367 MB/s
Apr 17 23:44:09.696918 kernel: raid6: .... xor() 7699 MB/s, rmw enabled
Apr 17 23:44:09.696949 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:44:09.719894 kernel: xor: automatically using best checksumming function avx
Apr 17 23:44:09.867890 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:44:09.877951 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:44:09.887138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:44:09.900517 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Apr 17 23:44:09.905128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:44:09.922020 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:44:09.934801 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Apr 17 23:44:09.963425 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:44:09.973034 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:44:10.017388 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:44:10.030101 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:44:10.064128 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:44:10.073104 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:44:10.077531 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:44:10.081084 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:44:10.099061 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:44:10.119588 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:44:10.136948 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:44:10.136987 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:44:10.139569 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:44:10.161240 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:44:10.161501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:10.183729 kernel: hv_vmbus: Vmbus version:5.2
Apr 17 23:44:10.183764 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 17 23:44:10.165744 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:44:10.183730 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:44:10.198891 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 17 23:44:10.183990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:10.190893 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:10.208614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:10.441461 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 17 23:44:10.441485 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 17 23:44:10.441497 kernel: PTP clock support registered
Apr 17 23:44:10.441512 kernel: hv_utils: Registering HyperV Utility Driver
Apr 17 23:44:10.441523 kernel: hv_vmbus: registering driver hv_utils
Apr 17 23:44:10.441533 kernel: hv_utils: Heartbeat IC version 3.0
Apr 17 23:44:10.441548 kernel: hv_utils: Shutdown IC version 3.2
Apr 17 23:44:10.441558 kernel: hv_utils: TimeSync IC version 4.0
Apr 17 23:44:10.441569 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 17 23:44:10.441587 kernel: hv_vmbus: registering driver hv_netvsc
Apr 17 23:44:10.441598 kernel: hv_vmbus: registering driver hid_hyperv
Apr 17 23:44:10.441609 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 17 23:44:10.441623 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 17 23:44:10.441802 kernel: hv_vmbus: registering driver hv_storvsc
Apr 17 23:44:10.441818 kernel: scsi host1: storvsc_host_t
Apr 17 23:44:10.441953 kernel: scsi host0: storvsc_host_t
Apr 17 23:44:10.442082 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 17 23:44:10.442216 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 17 23:44:10.354044 systemd-resolved[259]: Clock change detected. Flushing caches.
Apr 17 23:44:10.448468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:10.461493 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 17 23:44:10.461772 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 23:44:10.464955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:44:10.475734 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 17 23:44:10.494554 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 17 23:44:10.494849 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 17 23:44:10.500097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 23:44:10.500311 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 17 23:44:10.505937 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 17 23:44:10.506247 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 17 23:44:10.512977 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:10.520901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:44:10.520928 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 17 23:44:10.539737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 23:44:10.539930 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: VF slot 1 added
Apr 17 23:44:10.552724 kernel: hv_vmbus: registering driver hv_pci
Apr 17 23:44:10.557726 kernel: hv_pci 3bda4c03-5cb2-4eb3-9b94-c36cb9990e6c: PCI VMBus probing: Using version 0x10004
Apr 17 23:44:10.564734 kernel: hv_pci 3bda4c03-5cb2-4eb3-9b94-c36cb9990e6c: PCI host bridge to bus 5cb2:00
Apr 17 23:44:10.564914 kernel: pci_bus 5cb2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 17 23:44:10.570205 kernel: pci_bus 5cb2:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 17 23:44:10.576181 kernel: pci 5cb2:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 17 23:44:10.580745 kernel: pci 5cb2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 17 23:44:10.584818 kernel: pci 5cb2:00:02.0: enabling Extended Tags
Apr 17 23:44:10.598786 kernel: pci 5cb2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5cb2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 17 23:44:10.605440 kernel: pci_bus 5cb2:00: busn_res: [bus 00-ff] end is updated to 00
Apr 17 23:44:10.605759 kernel: pci 5cb2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 17 23:44:10.771957 kernel: mlx5_core 5cb2:00:02.0: enabling device (0000 -> 0002)
Apr 17 23:44:10.777733 kernel: mlx5_core 5cb2:00:02.0: firmware version: 14.30.5026
Apr 17 23:44:10.990969 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: VF registering: eth1
Apr 17 23:44:10.991328 kernel: mlx5_core 5cb2:00:02.0 eth1: joined to eth0
Apr 17 23:44:10.995857 kernel: mlx5_core 5cb2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 17 23:44:11.006757 kernel: mlx5_core 5cb2:00:02.0 enP23730s1: renamed from eth1
Apr 17 23:44:11.112624 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 17 23:44:11.125725 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (446)
Apr 17 23:44:11.135738 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (461)
Apr 17 23:44:11.158210 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 17 23:44:11.158415 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 17 23:44:11.164967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 17 23:44:11.173947 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:44:11.199231 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 17 23:44:12.209764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:44:12.210438 disk-uuid[603]: The operation has completed successfully.
Apr 17 23:44:12.328040 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:44:12.328159 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:44:12.354936 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:44:12.362607 sh[692]: Success
Apr 17 23:44:12.394233 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 17 23:44:12.635006 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:44:12.647846 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:44:12.652758 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:44:12.673515 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:44:12.673566 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:12.677126 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:44:12.680088 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:44:12.682820 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:44:12.974928 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:44:12.981043 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:44:12.990880 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:44:12.998012 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:44:13.015085 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:13.015150 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:13.020942 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:44:13.070730 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:44:13.086137 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:44:13.090681 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:13.092942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:44:13.102040 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:44:13.113493 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:44:13.126907 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:44:13.137721 systemd-networkd[873]: lo: Link UP
Apr 17 23:44:13.137740 systemd-networkd[873]: lo: Gained carrier
Apr 17 23:44:13.143865 systemd-networkd[873]: Enumeration completed
Apr 17 23:44:13.144236 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:44:13.144764 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:44:13.144767 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:44:13.149336 systemd[1]: Reached target network.target - Network.
Apr 17 23:44:13.215730 kernel: mlx5_core 5cb2:00:02.0 enP23730s1: Link up
Apr 17 23:44:13.251783 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: Data path switched to VF: enP23730s1
Apr 17 23:44:13.252045 systemd-networkd[873]: enP23730s1: Link UP
Apr 17 23:44:13.252187 systemd-networkd[873]: eth0: Link UP
Apr 17 23:44:13.252527 systemd-networkd[873]: eth0: Gained carrier
Apr 17 23:44:13.252541 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:44:13.265760 systemd-networkd[873]: enP23730s1: Gained carrier
Apr 17 23:44:13.292749 systemd-networkd[873]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 17 23:44:14.254495 ignition[876]: Ignition 2.19.0
Apr 17 23:44:14.254509 ignition[876]: Stage: fetch-offline
Apr 17 23:44:14.254561 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.254572 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.254693 ignition[876]: parsed url from cmdline: ""
Apr 17 23:44:14.254698 ignition[876]: no config URL provided
Apr 17 23:44:14.254727 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.254740 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.254747 ignition[876]: failed to fetch config: resource requires networking
Apr 17 23:44:14.254945 ignition[876]: Ignition finished successfully
Apr 17 23:44:14.276450 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:44:14.290843 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:44:14.309992 ignition[884]: Ignition 2.19.0
Apr 17 23:44:14.310005 ignition[884]: Stage: fetch
Apr 17 23:44:14.310220 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.310234 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.310340 ignition[884]: parsed url from cmdline: ""
Apr 17 23:44:14.310343 ignition[884]: no config URL provided
Apr 17 23:44:14.310348 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.310355 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.310373 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 17 23:44:14.402599 ignition[884]: GET result: OK
Apr 17 23:44:14.402714 ignition[884]: config has been read from IMDS userdata
Apr 17 23:44:14.402748 ignition[884]: parsing config with SHA512: 3d24c8a15991d8ba8912d6570b3c74b4c56d1711abc6e4f571b43db84df88c875f326626d00846acf99849cf83c599f189e3f08c6e69aad213e67896cb1ca4d6
Apr 17 23:44:14.410583 unknown[884]: fetched base config from "system"
Apr 17 23:44:14.410600 unknown[884]: fetched base config from "system"
Apr 17 23:44:14.411253 ignition[884]: fetch: fetch complete
Apr 17 23:44:14.410612 unknown[884]: fetched user config from "azure"
Apr 17 23:44:14.411259 ignition[884]: fetch: fetch passed
Apr 17 23:44:14.413977 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:44:14.411314 ignition[884]: Ignition finished successfully
Apr 17 23:44:14.430804 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:44:14.447762 ignition[890]: Ignition 2.19.0
Apr 17 23:44:14.447775 ignition[890]: Stage: kargs
Apr 17 23:44:14.447996 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.448008 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.449327 ignition[890]: kargs: kargs passed
Apr 17 23:44:14.454855 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:44:14.449379 ignition[890]: Ignition finished successfully
Apr 17 23:44:14.476938 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:44:14.493240 ignition[896]: Ignition 2.19.0
Apr 17 23:44:14.493253 ignition[896]: Stage: disks
Apr 17 23:44:14.493471 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.500389 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:44:14.493484 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.504068 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:44:14.494761 ignition[896]: disks: disks passed
Apr 17 23:44:14.494805 ignition[896]: Ignition finished successfully
Apr 17 23:44:14.519390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:44:14.519496 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:44:14.520416 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:44:14.520891 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:44:14.539961 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:44:14.601938 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 17 23:44:14.606264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:44:14.615866 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:44:14.711731 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:44:14.711900 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:44:14.712636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:44:14.745850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:44:14.760725 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (915)
Apr 17 23:44:14.768879 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:14.768954 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:14.772736 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:44:14.776866 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:44:14.785954 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 17 23:44:14.790517 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:44:14.795646 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:44:14.795694 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:44:14.807293 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:44:14.814545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:44:14.825904 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:44:15.296918 systemd-networkd[873]: eth0: Gained IPv6LL
Apr 17 23:44:15.557825 coreos-metadata[930]: Apr 17 23:44:15.557 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 17 23:44:15.564040 coreos-metadata[930]: Apr 17 23:44:15.563 INFO Fetch successful
Apr 17 23:44:15.567163 coreos-metadata[930]: Apr 17 23:44:15.566 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 17 23:44:15.578210 coreos-metadata[930]: Apr 17 23:44:15.578 INFO Fetch successful
Apr 17 23:44:15.597727 coreos-metadata[930]: Apr 17 23:44:15.597 INFO wrote hostname ci-4081.3.6-n-7251cc3c8a to /sysroot/etc/hostname
Apr 17 23:44:15.600594 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:44:15.735111 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:44:15.775538 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:44:15.781167 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:44:15.788236 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:44:16.798674 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:44:16.806937 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:44:16.816909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:44:16.824721 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:16.828538 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:44:16.858446 ignition[1033]: INFO : Ignition 2.19.0
Apr 17 23:44:16.861115 ignition[1033]: INFO : Stage: mount
Apr 17 23:44:16.861115 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:16.861115 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:16.861115 ignition[1033]: INFO : mount: mount passed
Apr 17 23:44:16.861115 ignition[1033]: INFO : Ignition finished successfully
Apr 17 23:44:16.861578 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:44:16.875350 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:44:16.892850 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:44:16.904844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:44:16.925722 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1045)
Apr 17 23:44:16.930722 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:16.930755 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:16.936775 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:44:16.943723 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:44:16.945377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:44:16.974731 ignition[1061]: INFO : Ignition 2.19.0
Apr 17 23:44:16.974731 ignition[1061]: INFO : Stage: files
Apr 17 23:44:16.974731 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:16.974731 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:16.985671 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:44:17.004143 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:44:17.004143 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:44:17.129258 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:44:17.129740 unknown[1061]: wrote ssh authorized keys file for user: core
Apr 17 23:44:17.233651 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:44:17.358009 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 23:44:17.741933 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:44:19.261810 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:19.261810 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:44:19.277856 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:44:19.283016 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:44:19.283016 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: files passed
Apr 17 23:44:19.291863 ignition[1061]: INFO : Ignition finished successfully
Apr 17 23:44:19.299364 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:44:19.320961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:44:19.325875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:44:19.337130 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:44:19.337248 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:44:19.353448 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:44:19.353448 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:44:19.361948 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:44:19.360616 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:44:19.362560 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:44:19.379310 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:44:19.410661 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:44:19.410803 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:44:19.418258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:44:19.423735 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:44:19.429378 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:44:19.437880 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:44:19.450839 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:44:19.459933 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:44:19.471721 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:44:19.471939 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:44:19.472653 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:44:19.473471 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:44:19.473619 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:44:19.474320 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:44:19.474897 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:44:19.475556 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:44:19.476125 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:44:19.476623 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:44:19.477192 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:44:19.477682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:44:19.478129 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:44:19.478545 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:44:19.479547 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:44:19.480550 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:44:19.480692 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:44:19.481947 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:44:19.482401 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:44:19.482796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:44:19.520533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:44:19.524194 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:44:19.524370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 17 23:44:19.603139 ignition[1115]: INFO : Ignition 2.19.0 Apr 17 23:44:19.603139 ignition[1115]: INFO : Stage: umount Apr 17 23:44:19.603139 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:44:19.603139 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 17 23:44:19.603139 ignition[1115]: INFO : umount: umount passed Apr 17 23:44:19.603139 ignition[1115]: INFO : Ignition finished successfully Apr 17 23:44:19.528592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:44:19.528723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:44:19.529010 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:44:19.529106 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:44:19.529430 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 17 23:44:19.529527 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 17 23:44:19.576004 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:44:19.604994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:44:19.608014 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:44:19.608224 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:44:19.619574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:44:19.626747 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:44:19.635006 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:44:19.635116 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:44:19.641694 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:44:19.642241 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 17 23:44:19.642347 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:44:19.654774 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:44:19.654830 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:44:19.658804 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:44:19.658864 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:44:19.664040 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 17 23:44:19.664097 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 17 23:44:19.669292 systemd[1]: Stopped target network.target - Network. Apr 17 23:44:19.674405 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:44:19.674454 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:44:19.677731 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:44:19.685336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:44:19.694100 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:44:19.702385 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:44:19.741960 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:44:19.747026 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:44:19.747092 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:44:19.754214 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:44:19.754273 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:44:19.759080 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:44:19.759144 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:44:19.759259 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Apr 17 23:44:19.759302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:44:19.777333 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:44:19.783052 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:44:19.789303 systemd-networkd[873]: eth0: DHCPv6 lease lost Apr 17 23:44:19.791625 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:44:19.791734 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:44:19.798133 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:44:19.798321 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:44:19.812071 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:44:19.812135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:44:19.823805 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:44:19.826537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:44:19.826600 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:44:19.832508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:44:19.835095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:44:19.837897 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:44:19.837946 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:44:19.838048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:44:19.838086 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:44:19.863432 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:44:19.887345 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Apr 17 23:44:19.887524 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:44:19.894076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:44:19.894126 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:44:19.902654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:44:19.902701 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:44:19.903149 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:44:19.924102 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: Data path switched from VF: enP23730s1 Apr 17 23:44:19.903195 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:44:19.910590 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:44:19.910648 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:44:19.939652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:44:19.939759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:44:19.952972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:44:19.956306 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:44:19.956391 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:44:19.963051 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 17 23:44:19.965825 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:44:19.969462 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:44:19.969504 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 17 23:44:19.972599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:44:19.972648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:44:19.976412 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:44:19.976504 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:44:19.981814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:44:19.981897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:44:20.540595 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:44:20.540772 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:44:20.546993 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:44:20.555545 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:44:20.555623 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:44:20.571939 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:44:20.584533 systemd[1]: Switching root. 
Apr 17 23:44:20.657844 systemd-journald[177]: Journal stopped Apr 17 23:44:09.113037 kernel:
Using GB pages for direct mapping Apr 17 23:44:09.113049 kernel: Secure boot disabled Apr 17 23:44:09.113068 kernel: ACPI: Early table checksum verification disabled Apr 17 23:44:09.113084 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 17 23:44:09.113098 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113111 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113125 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 17 23:44:09.113138 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 17 23:44:09.113151 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113165 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113181 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113194 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113207 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113221 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 17 23:44:09.113234 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 17 23:44:09.113247 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Apr 17 23:44:09.113261 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 17 23:44:09.113274 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 17 23:44:09.113287 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 17 23:44:09.113303 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 17 23:44:09.113316 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Apr 17 23:44:09.113329 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Apr 17 23:44:09.113343 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 17 23:44:09.113356 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 17 23:44:09.113369 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 17 23:44:09.113383 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 17 23:44:09.113396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 17 23:44:09.113409 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 17 23:44:09.113425 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 17 23:44:09.113439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 17 23:44:09.113452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 17 23:44:09.113465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 17 23:44:09.113479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 17 23:44:09.113492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 17 23:44:09.113505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 17 23:44:09.113519 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 17 23:44:09.113535 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 17 23:44:09.113549 kernel: Zone ranges: Apr 17 23:44:09.113563 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 17 23:44:09.113576 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 17 23:44:09.113590 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 17 23:44:09.113603 kernel: Movable zone start for each node Apr 17 23:44:09.113616 kernel: Early memory node ranges Apr 17 23:44:09.113629 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Apr 17 23:44:09.113643 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Apr 17 23:44:09.113659 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Apr 17 23:44:09.113672 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 17 23:44:09.113685 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 17 23:44:09.113699 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 17 23:44:09.113712 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 17 23:44:09.113725 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 17 23:44:09.113738 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 17 23:44:09.113751 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Apr 17 23:44:09.113765 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 17 23:44:09.113781 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 17 23:44:09.113794 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 17 23:44:09.113807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 17 23:44:09.113821 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 17 23:44:09.113834 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 17 23:44:09.113847 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 17 23:44:09.113868 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 17 23:44:09.114943 kernel: Booting paravirtualized kernel on Hyper-V Apr 17 23:44:09.114958 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 17 23:44:09.114976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 17 23:44:09.114987 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 17 23:44:09.114996 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 17 23:44:09.115004 kernel: pcpu-alloc: [0] 0 1 Apr 17 
23:44:09.115016 kernel: Hyper-V: PV spinlocks enabled Apr 17 23:44:09.115023 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 17 23:44:09.115037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:44:09.115045 kernel: random: crng init done Apr 17 23:44:09.115059 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 17 23:44:09.115067 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 17 23:44:09.115077 kernel: Fallback order for Node 0: 0 Apr 17 23:44:09.115087 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Apr 17 23:44:09.115094 kernel: Policy zone: Normal Apr 17 23:44:09.115106 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 17 23:44:09.115114 kernel: software IO TLB: area num 2. Apr 17 23:44:09.115125 kernel: Memory: 8066036K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316932K reserved, 0K cma-reserved) Apr 17 23:44:09.115135 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 17 23:44:09.115156 kernel: ftrace: allocating 37996 entries in 149 pages Apr 17 23:44:09.115165 kernel: ftrace: allocated 149 pages with 4 groups Apr 17 23:44:09.115177 kernel: Dynamic Preempt: voluntary Apr 17 23:44:09.115188 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 17 23:44:09.115201 kernel: rcu: RCU event tracing is enabled. Apr 17 23:44:09.115209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
Apr 17 23:44:09.115222 kernel: Trampoline variant of Tasks RCU enabled. Apr 17 23:44:09.115230 kernel: Rude variant of Tasks RCU enabled. Apr 17 23:44:09.115242 kernel: Tracing variant of Tasks RCU enabled. Apr 17 23:44:09.115253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 17 23:44:09.115266 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 17 23:44:09.115274 kernel: Using NULL legacy PIC Apr 17 23:44:09.115287 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 17 23:44:09.115295 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 17 23:44:09.115306 kernel: Console: colour dummy device 80x25 Apr 17 23:44:09.115315 kernel: printk: console [tty1] enabled Apr 17 23:44:09.115325 kernel: printk: console [ttyS0] enabled Apr 17 23:44:09.115338 kernel: printk: bootconsole [earlyser0] disabled Apr 17 23:44:09.115347 kernel: ACPI: Core revision 20230628 Apr 17 23:44:09.115359 kernel: Failed to register legacy timer interrupt Apr 17 23:44:09.115371 kernel: APIC: Switch to symmetric I/O mode setup Apr 17 23:44:09.115380 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 17 23:44:09.115390 kernel: Hyper-V: Using IPI hypercalls Apr 17 23:44:09.115401 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 17 23:44:09.115409 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 17 23:44:09.115417 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 17 23:44:09.115432 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 17 23:44:09.115441 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 17 23:44:09.115453 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 17 23:44:09.115461 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593906) Apr 17 23:44:09.115474 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 17 23:44:09.115482 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 17 23:44:09.115494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 17 23:44:09.115503 kernel: Spectre V2 : Mitigation: Retpolines Apr 17 23:44:09.115513 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 17 23:44:09.115523 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 17 23:44:09.115536 kernel: RETBleed: Vulnerable Apr 17 23:44:09.115546 kernel: Speculative Store Bypass: Vulnerable Apr 17 23:44:09.115554 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:44:09.115566 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 17 23:44:09.115575 kernel: active return thunk: its_return_thunk Apr 17 23:44:09.115585 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 17 23:44:09.115595 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 17 23:44:09.115603 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 17 23:44:09.115615 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 17 23:44:09.115623 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 17 23:44:09.115638 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 17 23:44:09.115646 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 17 23:44:09.115658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 17 23:44:09.115666 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 17 23:44:09.115677 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 17 23:44:09.115686 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 17 23:44:09.115694 kernel: 
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 17 23:44:09.115706 kernel: Freeing SMP alternatives memory: 32K Apr 17 23:44:09.115714 kernel: pid_max: default: 32768 minimum: 301 Apr 17 23:44:09.115727 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 17 23:44:09.115735 kernel: landlock: Up and running. Apr 17 23:44:09.115746 kernel: SELinux: Initializing. Apr 17 23:44:09.115759 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:44:09.115769 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 17 23:44:09.115778 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 17 23:44:09.115790 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:44:09.115798 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:44:09.115811 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 17 23:44:09.115819 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 17 23:44:09.115831 kernel: signal: max sigframe size: 3632 Apr 17 23:44:09.115840 kernel: rcu: Hierarchical SRCU implementation. Apr 17 23:44:09.115855 kernel: rcu: Max phase no-delay instances is 400. Apr 17 23:44:09.115886 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 17 23:44:09.115898 kernel: smp: Bringing up secondary CPUs ... Apr 17 23:44:09.115907 kernel: smpboot: x86: Booting SMP configuration: Apr 17 23:44:09.115917 kernel: .... node #0, CPUs: #1 Apr 17 23:44:09.115928 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. 
Apr 17 23:44:09.115937 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 17 23:44:09.115949 kernel: smp: Brought up 1 node, 2 CPUs Apr 17 23:44:09.115957 kernel: smpboot: Max logical packages: 1 Apr 17 23:44:09.115973 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 17 23:44:09.115981 kernel: devtmpfs: initialized Apr 17 23:44:09.115993 kernel: x86/mm: Memory block size: 128MB Apr 17 23:44:09.116002 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 17 23:44:09.116014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 17 23:44:09.116023 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 17 23:44:09.116034 kernel: pinctrl core: initialized pinctrl subsystem Apr 17 23:44:09.116043 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 17 23:44:09.116052 kernel: audit: initializing netlink subsys (disabled) Apr 17 23:44:09.116066 kernel: audit: type=2000 audit(1776469447.029:1): state=initialized audit_enabled=0 res=1 Apr 17 23:44:09.116074 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 17 23:44:09.116087 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 17 23:44:09.116095 kernel: cpuidle: using governor menu Apr 17 23:44:09.116107 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 17 23:44:09.116115 kernel: dca service started, version 1.12.1 Apr 17 23:44:09.116126 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Apr 17 23:44:09.116136 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Apr 17 23:44:09.116148 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 17 23:44:09.116159 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:44:09.116172 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:44:09.116180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:44:09.116193 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:44:09.116201 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:44:09.116212 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:44:09.116222 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:44:09.116230 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:44:09.116245 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:44:09.116253 kernel: ACPI: Interpreter enabled
Apr 17 23:44:09.116265 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:44:09.116273 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:44:09.116285 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:44:09.116294 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 17 23:44:09.116304 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 17 23:44:09.116314 kernel: iommu: Default domain type: Translated
Apr 17 23:44:09.116322 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:44:09.116335 kernel: efivars: Registered efivars operations
Apr 17 23:44:09.116346 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:44:09.116358 kernel: PCI: System does not support PCI
Apr 17 23:44:09.116365 kernel: vgaarb: loaded
Apr 17 23:44:09.116378 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 17 23:44:09.116386 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:44:09.116397 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:44:09.116406 kernel: pnp: PnP ACPI init
Apr 17 23:44:09.116416 kernel: pnp: PnP ACPI: found 3 devices
Apr 17 23:44:09.116427 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:44:09.116438 kernel: NET: Registered PF_INET protocol family
Apr 17 23:44:09.116450 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:44:09.116459 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 17 23:44:09.116471 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:44:09.116480 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:44:09.116490 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 17 23:44:09.116500 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 17 23:44:09.116509 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:44:09.116521 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 17 23:44:09.116536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:44:09.116544 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:44:09.116557 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:44:09.116565 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 23:44:09.116577 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 17 23:44:09.116586 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:44:09.116596 kernel: Initialise system trusted keyrings
Apr 17 23:44:09.116606 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 17 23:44:09.116619 kernel: Key type asymmetric registered
Apr 17 23:44:09.116629 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:44:09.116637 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:44:09.116648 kernel: io scheduler mq-deadline registered
Apr 17 23:44:09.116658 kernel: io scheduler kyber registered
Apr 17 23:44:09.116666 kernel: io scheduler bfq registered
Apr 17 23:44:09.116678 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:44:09.116686 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:44:09.116699 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:44:09.116707 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 17 23:44:09.116722 kernel: i8042: PNP: No PS/2 controller found.
Apr 17 23:44:09.116891 kernel: rtc_cmos 00:02: registered as rtc0
Apr 17 23:44:09.117006 kernel: rtc_cmos 00:02: setting system clock to 2026-04-17T23:44:08 UTC (1776469448)
Apr 17 23:44:09.117112 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 17 23:44:09.117129 kernel: intel_pstate: CPU model not supported
Apr 17 23:44:09.117150 kernel: efifb: probing for efifb
Apr 17 23:44:09.117170 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 17 23:44:09.117197 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 17 23:44:09.117214 kernel: efifb: scrolling: redraw
Apr 17 23:44:09.117230 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:44:09.117249 kernel: Console: switching to colour frame buffer device 128x48
Apr 17 23:44:09.117265 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:44:09.117280 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:44:09.117296 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:44:09.117312 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:44:09.117330 kernel: Segment Routing with IPv6
Apr 17 23:44:09.117355 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:44:09.117372 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:44:09.117390 kernel: Key type dns_resolver registered
Apr 17 23:44:09.117409 kernel: IPI shorthand broadcast: enabled
Apr 17 23:44:09.117427 kernel: sched_clock: Marking stable (870003200, 47578100)->(1134062400, -216481100)
Apr 17 23:44:09.117443 kernel: registered taskstats version 1
Apr 17 23:44:09.117459 kernel: Loading compiled-in X.509 certificates
Apr 17 23:44:09.117475 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:44:09.117492 kernel: Key type .fscrypt registered
Apr 17 23:44:09.117512 kernel: Key type fscrypt-provisioning registered
Apr 17 23:44:09.117527 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:44:09.117543 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:44:09.117559 kernel: ima: No architecture policies found
Apr 17 23:44:09.117575 kernel: clk: Disabling unused clocks
Apr 17 23:44:09.117595 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:44:09.117613 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:44:09.117630 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:44:09.117646 kernel: Run /init as init process
Apr 17 23:44:09.117670 kernel: with arguments:
Apr 17 23:44:09.117685 kernel: /init
Apr 17 23:44:09.117700 kernel: with environment:
Apr 17 23:44:09.117718 kernel: HOME=/
Apr 17 23:44:09.117733 kernel: TERM=linux
Apr 17 23:44:09.117755 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:44:09.117776 systemd[1]: Detected virtualization microsoft.
Apr 17 23:44:09.117794 systemd[1]: Detected architecture x86-64.
Apr 17 23:44:09.117818 systemd[1]: Running in initrd.
Apr 17 23:44:09.117834 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:44:09.117850 systemd[1]: Hostname set to .
Apr 17 23:44:09.120650 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:44:09.120669 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:44:09.120685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:44:09.120706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:44:09.120722 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:44:09.120742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:44:09.120758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:44:09.120773 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:44:09.120791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:44:09.120806 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:44:09.120821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:44:09.120837 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:44:09.120854 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:44:09.120928 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:44:09.120944 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:44:09.120959 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:44:09.120974 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:44:09.120989 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:44:09.121004 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:44:09.121020 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:44:09.121034 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:44:09.121053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:44:09.121068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:44:09.121083 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:44:09.121098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:44:09.121113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:44:09.121128 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:44:09.121144 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:44:09.121159 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:44:09.121177 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:44:09.121191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:09.121232 systemd-journald[177]: Collecting audit messages is disabled.
Apr 17 23:44:09.121265 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:44:09.121284 systemd-journald[177]: Journal started
Apr 17 23:44:09.121314 systemd-journald[177]: Runtime Journal (/run/log/journal/8474bcf99115403db21f2831728b1e8d) is 8.0M, max 158.7M, 150.7M free.
Apr 17 23:44:09.129890 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:44:09.128733 systemd-modules-load[178]: Inserted module 'overlay'
Apr 17 23:44:09.132645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:44:09.136297 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:44:09.154815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:09.169070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:44:09.177978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:44:09.190310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:44:09.193027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:44:09.208338 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:44:09.211120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:44:09.228529 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 17 23:44:09.228880 kernel: Bridge firewalling registered
Apr 17 23:44:09.233318 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:44:09.240794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:44:09.245134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:09.256036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:44:09.267011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:44:09.279064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:44:09.282133 dracut-cmdline[209]: dracut-dracut-053
Apr 17 23:44:09.287177 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:44:09.311072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:44:09.323019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:44:09.371514 systemd-resolved[259]: Positive Trust Anchors:
Apr 17 23:44:09.374595 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:44:09.380303 kernel: SCSI subsystem initialized
Apr 17 23:44:09.380406 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:44:09.402082 systemd-resolved[259]: Defaulting to hostname 'linux'.
Apr 17 23:44:09.406000 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:44:09.418978 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:44:09.415377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:44:09.429880 kernel: iscsi: registered transport (tcp)
Apr 17 23:44:09.451635 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:44:09.451701 kernel: QLogic iSCSI HBA Driver
Apr 17 23:44:09.489165 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:44:09.502121 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:44:09.530288 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:44:09.530379 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:44:09.533847 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:44:09.573885 kernel: raid6: avx512x4 gen() 18367 MB/s
Apr 17 23:44:09.593886 kernel: raid6: avx512x2 gen() 18194 MB/s
Apr 17 23:44:09.612872 kernel: raid6: avx512x1 gen() 18045 MB/s
Apr 17 23:44:09.631872 kernel: raid6: avx2x4 gen() 18133 MB/s
Apr 17 23:44:09.651876 kernel: raid6: avx2x2 gen() 18188 MB/s
Apr 17 23:44:09.672248 kernel: raid6: avx2x1 gen() 13955 MB/s
Apr 17 23:44:09.672279 kernel: raid6: using algorithm avx512x4 gen() 18367 MB/s
Apr 17 23:44:09.696918 kernel: raid6: .... xor() 7699 MB/s, rmw enabled
Apr 17 23:44:09.696949 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:44:09.719894 kernel: xor: automatically using best checksumming function avx
Apr 17 23:44:09.867890 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:44:09.877951 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:44:09.887138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:44:09.900517 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Apr 17 23:44:09.905128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:44:09.922020 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:44:09.934801 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Apr 17 23:44:09.963425 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:44:09.973034 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:44:10.017388 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:44:10.030101 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:44:10.064128 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:44:10.073104 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:44:10.077531 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:44:10.081084 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:44:10.099061 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:44:10.119588 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:44:10.136948 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:44:10.136987 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:44:10.139569 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:44:10.161240 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:44:10.161501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:10.183729 kernel: hv_vmbus: Vmbus version:5.2
Apr 17 23:44:10.183764 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 17 23:44:10.165744 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:44:10.183730 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:44:10.198891 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 17 23:44:10.183990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:10.190893 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:10.208614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:10.441461 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 17 23:44:10.441485 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 17 23:44:10.441497 kernel: PTP clock support registered
Apr 17 23:44:10.441512 kernel: hv_utils: Registering HyperV Utility Driver
Apr 17 23:44:10.441523 kernel: hv_vmbus: registering driver hv_utils
Apr 17 23:44:10.441533 kernel: hv_utils: Heartbeat IC version 3.0
Apr 17 23:44:10.441548 kernel: hv_utils: Shutdown IC version 3.2
Apr 17 23:44:10.441558 kernel: hv_utils: TimeSync IC version 4.0
Apr 17 23:44:10.441569 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 17 23:44:10.441587 kernel: hv_vmbus: registering driver hv_netvsc
Apr 17 23:44:10.441598 kernel: hv_vmbus: registering driver hid_hyperv
Apr 17 23:44:10.441609 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 17 23:44:10.441623 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 17 23:44:10.441802 kernel: hv_vmbus: registering driver hv_storvsc
Apr 17 23:44:10.441818 kernel: scsi host1: storvsc_host_t
Apr 17 23:44:10.441953 kernel: scsi host0: storvsc_host_t
Apr 17 23:44:10.442082 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 17 23:44:10.442216 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 17 23:44:10.354044 systemd-resolved[259]: Clock change detected. Flushing caches.
Apr 17 23:44:10.448468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:10.461493 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 17 23:44:10.461772 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 23:44:10.464955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:44:10.475734 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 17 23:44:10.494554 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 17 23:44:10.494849 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 17 23:44:10.500097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 23:44:10.500311 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 17 23:44:10.505937 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 17 23:44:10.506247 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 17 23:44:10.512977 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:10.520901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:44:10.520928 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 17 23:44:10.539737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#31 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 23:44:10.539930 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: VF slot 1 added
Apr 17 23:44:10.552724 kernel: hv_vmbus: registering driver hv_pci
Apr 17 23:44:10.557726 kernel: hv_pci 3bda4c03-5cb2-4eb3-9b94-c36cb9990e6c: PCI VMBus probing: Using version 0x10004
Apr 17 23:44:10.564734 kernel: hv_pci 3bda4c03-5cb2-4eb3-9b94-c36cb9990e6c: PCI host bridge to bus 5cb2:00
Apr 17 23:44:10.564914 kernel: pci_bus 5cb2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 17 23:44:10.570205 kernel: pci_bus 5cb2:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 17 23:44:10.576181 kernel: pci 5cb2:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 17 23:44:10.580745 kernel: pci 5cb2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 17 23:44:10.584818 kernel: pci 5cb2:00:02.0: enabling Extended Tags
Apr 17 23:44:10.598786 kernel: pci 5cb2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5cb2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 17 23:44:10.605440 kernel: pci_bus 5cb2:00: busn_res: [bus 00-ff] end is updated to 00
Apr 17 23:44:10.605759 kernel: pci 5cb2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 17 23:44:10.771957 kernel: mlx5_core 5cb2:00:02.0: enabling device (0000 -> 0002)
Apr 17 23:44:10.777733 kernel: mlx5_core 5cb2:00:02.0: firmware version: 14.30.5026
Apr 17 23:44:10.990969 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: VF registering: eth1
Apr 17 23:44:10.991328 kernel: mlx5_core 5cb2:00:02.0 eth1: joined to eth0
Apr 17 23:44:10.995857 kernel: mlx5_core 5cb2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 17 23:44:11.006757 kernel: mlx5_core 5cb2:00:02.0 enP23730s1: renamed from eth1
Apr 17 23:44:11.112624 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 17 23:44:11.125725 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (446)
Apr 17 23:44:11.135738 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (461)
Apr 17 23:44:11.158210 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 17 23:44:11.158415 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 17 23:44:11.164967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 17 23:44:11.173947 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:44:11.199231 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 17 23:44:12.209764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 17 23:44:12.210438 disk-uuid[603]: The operation has completed successfully.
Apr 17 23:44:12.328040 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:44:12.328159 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:44:12.354936 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:44:12.362607 sh[692]: Success
Apr 17 23:44:12.394233 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 17 23:44:12.635006 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:44:12.647846 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:44:12.652758 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:44:12.673515 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:44:12.673566 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:12.677126 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:44:12.680088 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:44:12.682820 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:44:12.974928 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:44:12.981043 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:44:12.990880 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:44:12.998012 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:44:13.015085 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:13.015150 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:13.020942 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:44:13.070730 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:44:13.086137 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:44:13.090681 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:13.092942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:44:13.102040 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:44:13.113493 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:44:13.126907 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:44:13.137721 systemd-networkd[873]: lo: Link UP
Apr 17 23:44:13.137740 systemd-networkd[873]: lo: Gained carrier
Apr 17 23:44:13.143865 systemd-networkd[873]: Enumeration completed
Apr 17 23:44:13.144236 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:44:13.144764 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:44:13.144767 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:44:13.149336 systemd[1]: Reached target network.target - Network.
Apr 17 23:44:13.215730 kernel: mlx5_core 5cb2:00:02.0 enP23730s1: Link up
Apr 17 23:44:13.251783 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: Data path switched to VF: enP23730s1
Apr 17 23:44:13.252045 systemd-networkd[873]: enP23730s1: Link UP
Apr 17 23:44:13.252187 systemd-networkd[873]: eth0: Link UP
Apr 17 23:44:13.252527 systemd-networkd[873]: eth0: Gained carrier
Apr 17 23:44:13.252541 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:44:13.265760 systemd-networkd[873]: enP23730s1: Gained carrier
Apr 17 23:44:13.292749 systemd-networkd[873]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 17 23:44:14.254495 ignition[876]: Ignition 2.19.0
Apr 17 23:44:14.254509 ignition[876]: Stage: fetch-offline
Apr 17 23:44:14.254561 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.254572 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.254693 ignition[876]: parsed url from cmdline: ""
Apr 17 23:44:14.254698 ignition[876]: no config URL provided
Apr 17 23:44:14.254727 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.254740 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.254747 ignition[876]: failed to fetch config: resource requires networking
Apr 17 23:44:14.254945 ignition[876]: Ignition finished successfully
Apr 17 23:44:14.276450 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:44:14.290843 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:44:14.309992 ignition[884]: Ignition 2.19.0
Apr 17 23:44:14.310005 ignition[884]: Stage: fetch
Apr 17 23:44:14.310220 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.310234 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.310340 ignition[884]: parsed url from cmdline: ""
Apr 17 23:44:14.310343 ignition[884]: no config URL provided
Apr 17 23:44:14.310348 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.310355 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:44:14.310373 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 17 23:44:14.402599 ignition[884]: GET result: OK
Apr 17 23:44:14.402714 ignition[884]: config has been read from IMDS userdata
Apr 17 23:44:14.402748 ignition[884]: parsing config with SHA512: 3d24c8a15991d8ba8912d6570b3c74b4c56d1711abc6e4f571b43db84df88c875f326626d00846acf99849cf83c599f189e3f08c6e69aad213e67896cb1ca4d6
Apr 17 23:44:14.410583 unknown[884]: fetched base config from "system"
Apr 17 23:44:14.410600 unknown[884]: fetched base config from "system"
Apr 17 23:44:14.411253 ignition[884]: fetch: fetch complete
Apr 17 23:44:14.410612 unknown[884]: fetched user config from "azure"
Apr 17 23:44:14.411259 ignition[884]: fetch: fetch passed
Apr 17 23:44:14.413977 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:44:14.411314 ignition[884]: Ignition finished successfully
Apr 17 23:44:14.430804 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:44:14.447762 ignition[890]: Ignition 2.19.0
Apr 17 23:44:14.447775 ignition[890]: Stage: kargs
Apr 17 23:44:14.447996 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.448008 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.449327 ignition[890]: kargs: kargs passed
Apr 17 23:44:14.454855 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:44:14.449379 ignition[890]: Ignition finished successfully
Apr 17 23:44:14.476938 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:44:14.493240 ignition[896]: Ignition 2.19.0
Apr 17 23:44:14.493253 ignition[896]: Stage: disks
Apr 17 23:44:14.493471 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:14.500389 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:44:14.493484 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:14.504068 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:44:14.494761 ignition[896]: disks: disks passed
Apr 17 23:44:14.494805 ignition[896]: Ignition finished successfully
Apr 17 23:44:14.519390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:44:14.519496 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:44:14.520416 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:44:14.520891 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:44:14.539961 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:44:14.601938 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 17 23:44:14.606264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:44:14.615866 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:44:14.711731 kernel: EXT4-fs (sda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:44:14.711900 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:44:14.712636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:44:14.745850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:44:14.760725 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (915)
Apr 17 23:44:14.768879 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:14.768954 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:14.772736 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:44:14.776866 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:44:14.785954 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 17 23:44:14.790517 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:44:14.795646 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:44:14.795694 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:44:14.807293 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:44:14.814545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:44:14.825904 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:44:15.296918 systemd-networkd[873]: eth0: Gained IPv6LL
Apr 17 23:44:15.557825 coreos-metadata[930]: Apr 17 23:44:15.557 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 17 23:44:15.564040 coreos-metadata[930]: Apr 17 23:44:15.563 INFO Fetch successful
Apr 17 23:44:15.567163 coreos-metadata[930]: Apr 17 23:44:15.566 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 17 23:44:15.578210 coreos-metadata[930]: Apr 17 23:44:15.578 INFO Fetch successful
Apr 17 23:44:15.597727 coreos-metadata[930]: Apr 17 23:44:15.597 INFO wrote hostname ci-4081.3.6-n-7251cc3c8a to /sysroot/etc/hostname
Apr 17 23:44:15.600594 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:44:15.735111 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:44:15.775538 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:44:15.781167 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:44:15.788236 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:44:16.798674 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:44:16.806937 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:44:16.816909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:44:16.824721 kernel: BTRFS info (device sda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:16.828538 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:44:16.858446 ignition[1033]: INFO : Ignition 2.19.0
Apr 17 23:44:16.861115 ignition[1033]: INFO : Stage: mount
Apr 17 23:44:16.861115 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:16.861115 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:16.861115 ignition[1033]: INFO : mount: mount passed
Apr 17 23:44:16.861115 ignition[1033]: INFO : Ignition finished successfully
Apr 17 23:44:16.861578 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:44:16.875350 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:44:16.892850 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:44:16.904844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:44:16.925722 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1045)
Apr 17 23:44:16.930722 kernel: BTRFS info (device sda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:44:16.930755 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:44:16.936775 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:44:16.943723 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:44:16.945377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:44:16.974731 ignition[1061]: INFO : Ignition 2.19.0
Apr 17 23:44:16.974731 ignition[1061]: INFO : Stage: files
Apr 17 23:44:16.974731 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:16.974731 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:16.985671 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:44:17.004143 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:44:17.004143 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:44:17.129258 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:44:17.133796 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:44:17.129740 unknown[1061]: wrote ssh authorized keys file for user: core
Apr 17 23:44:17.233651 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:44:17.358009 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:17.364340 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 17 23:44:17.741933 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:44:19.261810 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 17 23:44:19.261810 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:44:19.277856 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:44:19.283016 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:44:19.283016 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:44:19.291863 ignition[1061]: INFO : files: files passed
Apr 17 23:44:19.291863 ignition[1061]: INFO : Ignition finished successfully
Apr 17 23:44:19.299364 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:44:19.320961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:44:19.325875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:44:19.337130 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:44:19.337248 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:44:19.353448 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:44:19.353448 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:44:19.361948 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:44:19.360616 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:44:19.362560 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:44:19.379310 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:44:19.410661 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:44:19.410803 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:44:19.418258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:44:19.423735 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:44:19.429378 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:44:19.437880 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:44:19.450839 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:44:19.459933 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:44:19.471721 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:44:19.471939 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:44:19.472653 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:44:19.473471 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:44:19.473619 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:44:19.474320 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:44:19.474897 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:44:19.475556 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:44:19.476125 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:44:19.476623 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:44:19.477192 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:44:19.477682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:44:19.478129 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:44:19.478545 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:44:19.479547 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:44:19.480550 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:44:19.480692 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:44:19.481947 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:44:19.482401 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:44:19.482796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:44:19.520533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:44:19.524194 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:44:19.524370 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:44:19.603139 ignition[1115]: INFO : Ignition 2.19.0
Apr 17 23:44:19.603139 ignition[1115]: INFO : Stage: umount
Apr 17 23:44:19.603139 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:19.603139 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 17 23:44:19.603139 ignition[1115]: INFO : umount: umount passed
Apr 17 23:44:19.603139 ignition[1115]: INFO : Ignition finished successfully
Apr 17 23:44:19.528592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:44:19.528723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:44:19.529010 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:44:19.529106 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:44:19.529430 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 17 23:44:19.529527 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:44:19.576004 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:44:19.604994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:44:19.608014 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:44:19.608224 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:44:19.619574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:44:19.626747 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:44:19.635006 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:44:19.635116 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:44:19.641694 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:44:19.642241 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:44:19.642347 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:44:19.654774 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:44:19.654830 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:44:19.658804 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:44:19.658864 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:44:19.664040 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:44:19.664097 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:44:19.669292 systemd[1]: Stopped target network.target - Network.
Apr 17 23:44:19.674405 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:44:19.674454 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:44:19.677731 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:44:19.685336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:44:19.694100 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:44:19.702385 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:44:19.741960 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:44:19.747026 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:44:19.747092 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:44:19.754214 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:44:19.754273 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:44:19.759080 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:44:19.759144 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:44:19.759259 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:44:19.759302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:44:19.777333 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:44:19.783052 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:44:19.789303 systemd-networkd[873]: eth0: DHCPv6 lease lost
Apr 17 23:44:19.791625 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:44:19.791734 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:44:19.798133 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:44:19.798321 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:44:19.812071 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:44:19.812135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:44:19.823805 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:44:19.826537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:44:19.826600 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:44:19.832508 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:44:19.835095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:44:19.837897 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:44:19.837946 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:44:19.838048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:44:19.838086 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:44:19.863432 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:44:19.887345 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:44:19.887524 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:44:19.894076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:44:19.894126 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:44:19.902654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:44:19.902701 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:44:19.903149 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:44:19.924102 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: Data path switched from VF: enP23730s1
Apr 17 23:44:19.903195 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:44:19.910590 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:44:19.910648 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:44:19.939652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:44:19.939759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:19.952972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:44:19.956306 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:44:19.956391 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:44:19.963051 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:44:19.965825 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:44:19.969462 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:44:19.969504 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:44:19.972599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:44:19.972648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:19.976412 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:44:19.976504 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:44:19.981814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:44:19.981897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:44:20.540595 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:44:20.540772 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:44:20.546993 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:44:20.555545 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:44:20.555623 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:44:20.571939 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:44:20.584533 systemd[1]: Switching root.
Apr 17 23:44:20.657844 systemd-journald[177]: Journal stopped
Apr 17 23:44:25.368038 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:44:25.368076 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:44:25.368099 kernel: SELinux: policy capability open_perms=1
Apr 17 23:44:25.368107 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:44:25.368117 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:44:25.368136 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:44:25.368150 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:44:25.368159 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:44:25.368170 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:44:25.368190 kernel: audit: type=1403 audit(1776469461.864:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:44:25.368206 systemd[1]: Successfully loaded SELinux policy in 254.221ms.
Apr 17 23:44:25.368216 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.420ms.
Apr 17 23:44:25.368229 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:44:25.368255 systemd[1]: Detected virtualization microsoft.
Apr 17 23:44:25.368273 systemd[1]: Detected architecture x86-64.
Apr 17 23:44:25.368283 systemd[1]: Detected first boot.
Apr 17 23:44:25.368301 systemd[1]: Hostname set to <ci-4081.3.6-n-7251cc3c8a>.
Apr 17 23:44:25.368322 systemd[1]: Initializing machine ID from random generator.
Apr 17 23:44:25.368340 zram_generator::config[1158]: No configuration found.
Apr 17 23:44:25.368354 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:44:25.368364 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:44:25.368390 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:44:25.368409 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:44:25.368420 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:44:25.368430 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:44:25.368452 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:44:25.368472 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:44:25.368482 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:44:25.368495 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:44:25.368519 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:44:25.368532 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:44:25.368542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:44:25.368560 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:44:25.368581 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:44:25.368602 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:44:25.368612 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:44:25.368624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:44:25.368645 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:44:25.368661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:44:25.368671 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:44:25.368689 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:44:25.368716 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:44:25.368728 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:44:25.368743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:44:25.368762 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:44:25.368774 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:44:25.368786 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:44:25.368812 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:44:25.368833 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:44:25.368856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:44:25.368881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:44:25.368902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:44:25.368925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:44:25.368947 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:44:25.368967 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:44:25.368991 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:44:25.369015 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:25.369038 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:44:25.369059 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:44:25.369083 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:44:25.369105 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:44:25.369128 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:44:25.369147 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:44:25.369176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:44:25.369198 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:44:25.369220 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:44:25.369240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:44:25.369264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:44:25.369290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:44:25.369311 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:44:25.369336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:44:25.369367 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:44:25.369387 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:44:25.369408 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:44:25.369429 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:44:25.369452 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:44:25.369477 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:44:25.369500 kernel: loop: module loaded
Apr 17 23:44:25.369517 kernel: fuse: init (API version 7.39)
Apr 17 23:44:25.369536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:44:25.369560 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:44:25.369585 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:44:25.369611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:44:25.369660 systemd-journald[1250]: Collecting audit messages is disabled.
Apr 17 23:44:25.369695 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:44:25.369721 systemd[1]: Stopped verity-setup.service.
Apr 17 23:44:25.369738 kernel: ACPI: bus type drm_connector registered
Apr 17 23:44:25.369753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:25.369771 systemd-journald[1250]: Journal started
Apr 17 23:44:25.369803 systemd-journald[1250]: Runtime Journal (/run/log/journal/001d5af20db54fe9ac2008b1f4d5694f) is 8.0M, max 158.7M, 150.7M free.
Apr 17 23:44:24.545517 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:44:24.719761 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:44:24.720147 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:44:25.379013 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:44:25.379598 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:44:25.382800 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:44:25.386381 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:44:25.389445 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:44:25.392971 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:44:25.396873 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:44:25.399990 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:44:25.403593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:44:25.407530 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:44:25.407807 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:44:25.411407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:44:25.411577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:44:25.415940 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:44:25.416244 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:44:25.420440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:44:25.420863 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:44:25.425417 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:44:25.425792 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:44:25.429814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:44:25.430041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:44:25.434099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:44:25.438196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:44:25.442571 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:44:25.466224 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:44:25.477787 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:44:25.488893 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:44:25.492279 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:44:25.492324 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:44:25.496914 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:44:25.502900 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:44:25.507903 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:44:25.511348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:44:25.513316 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:44:25.518340 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:44:25.522301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:44:25.523800 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:44:25.527666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:44:25.529111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:44:25.538860 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:44:25.554490 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:44:25.561485 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:44:25.566029 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:44:25.568439 systemd-journald[1250]: Time spent on flushing to /var/log/journal/001d5af20db54fe9ac2008b1f4d5694f is 50.617ms for 950 entries.
Apr 17 23:44:25.568439 systemd-journald[1250]: System Journal (/var/log/journal/001d5af20db54fe9ac2008b1f4d5694f) is 8.0M, max 2.6G, 2.6G free.
Apr 17 23:44:25.640869 systemd-journald[1250]: Received client request to flush runtime journal.
Apr 17 23:44:25.640932 kernel: loop0: detected capacity change from 0 to 219192
Apr 17 23:44:25.572748 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:44:25.579979 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:44:25.586387 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:44:25.593061 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:44:25.604905 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:44:25.623371 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:44:25.647778 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:44:25.661425 udevadm[1304]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:44:25.693693 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:44:25.694990 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:44:25.701564 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:44:25.738733 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:44:25.781218 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Apr 17 23:44:25.781245 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Apr 17 23:44:25.790757 kernel: loop1: detected capacity change from 0 to 140768
Apr 17 23:44:25.794039 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:44:25.805900 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:44:25.897492 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:44:25.913908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:44:25.931760 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Apr 17 23:44:25.931786 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Apr 17 23:44:25.936078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:44:26.277747 kernel: loop2: detected capacity change from 0 to 31056
Apr 17 23:44:26.682287 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:44:26.693972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:44:26.716727 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Apr 17 23:44:26.743738 kernel: loop3: detected capacity change from 0 to 142488
Apr 17 23:44:26.894778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:44:26.909655 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:44:26.982852 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:44:26.994717 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 23:44:27.085762 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:44:27.091759 kernel: hv_vmbus: registering driver hv_balloon
Apr 17 23:44:27.099542 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:44:27.117826 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 17 23:44:27.159101 kernel: hv_vmbus: registering driver hyperv_fb
Apr 17 23:44:27.168770 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 17 23:44:27.175888 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 17 23:44:27.182615 kernel: Console: switching to colour dummy device 80x25
Apr 17 23:44:27.191007 kernel: Console: switching to colour frame buffer device 128x48
Apr 17 23:44:27.271586 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#86 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 17 23:44:27.271911 kernel: loop4: detected capacity change from 0 to 219192
Apr 17 23:44:27.270667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:27.296267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:44:27.298444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:27.324530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:27.357279 kernel: loop5: detected capacity change from 0 to 140768
Apr 17 23:44:27.393744 kernel: loop6: detected capacity change from 0 to 31056
Apr 17 23:44:27.394997 systemd-networkd[1329]: lo: Link UP
Apr 17 23:44:27.395011 systemd-networkd[1329]: lo: Gained carrier
Apr 17 23:44:27.399621 systemd-networkd[1329]: Enumeration completed
Apr 17 23:44:27.399755 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:44:27.410849 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:44:27.420204 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:44:27.422795 systemd-networkd[1329]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:44:27.447183 kernel: loop7: detected capacity change from 0 to 142488
Apr 17 23:44:27.472516 (sd-merge)[1375]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 17 23:44:27.473534 (sd-merge)[1375]: Merged extensions into '/usr'.
Apr 17 23:44:27.495731 kernel: mlx5_core 5cb2:00:02.0 enP23730s1: Link up
Apr 17 23:44:27.494109 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:44:27.494136 systemd[1]: Reloading...
Apr 17 23:44:27.523217 kernel: hv_netvsc 000d3ab4-def0-000d-3ab4-def0000d3ab4 eth0: Data path switched to VF: enP23730s1
Apr 17 23:44:27.530257 systemd-networkd[1329]: enP23730s1: Link UP
Apr 17 23:44:27.530589 systemd-networkd[1329]: eth0: Link UP
Apr 17 23:44:27.531258 systemd-networkd[1329]: eth0: Gained carrier
Apr 17 23:44:27.531366 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:44:27.537074 systemd-networkd[1329]: enP23730s1: Gained carrier
Apr 17 23:44:27.572793 systemd-networkd[1329]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 17 23:44:27.592760 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1325)
Apr 17 23:44:27.624739 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Apr 17 23:44:27.655736 zram_generator::config[1418]: No configuration found.
Apr 17 23:44:27.879531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:44:27.963929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 17 23:44:27.967752 systemd[1]: Reloading finished in 472 ms.
Apr 17 23:44:28.002173 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:44:28.007001 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:28.026127 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:44:28.038914 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:44:28.049902 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:44:28.056903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:44:28.063860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:44:28.068799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:44:28.068893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:28.072495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:28.079018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:44:28.087364 systemd[1]: Reloading requested from client PID 1505 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:44:28.087382 systemd[1]: Reloading...
Apr 17 23:44:28.107920 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:44:28.108917 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:44:28.110440 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:44:28.111039 systemd-tmpfiles[1508]: ACLs are not supported, ignoring.
Apr 17 23:44:28.111231 systemd-tmpfiles[1508]: ACLs are not supported, ignoring.
Apr 17 23:44:28.137941 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:44:28.138160 systemd-tmpfiles[1508]: Skipping /boot
Apr 17 23:44:28.160129 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:44:28.160295 systemd-tmpfiles[1508]: Skipping /boot
Apr 17 23:44:28.197564 lvm[1506]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:44:28.228738 zram_generator::config[1549]: No configuration found.
Apr 17 23:44:28.362555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:44:28.444636 systemd[1]: Reloading finished in 354 ms.
Apr 17 23:44:28.470182 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:44:28.475331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:44:28.479775 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:44:28.483878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:28.494256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:44:28.504395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:44:28.521037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:44:28.525965 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:44:28.531347 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:44:28.543878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:44:28.553187 lvm[1613]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:44:28.556051 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:44:28.567321 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:28.567586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:44:28.574834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:44:28.588761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:44:28.598094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:44:28.604498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:44:28.604670 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:28.609262 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:44:28.614193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:44:28.614394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:44:28.619105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:44:28.619339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:44:28.628063 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:44:28.628274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:44:28.651007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:28.651362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:44:28.658082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:44:28.665996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:44:28.682812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:44:28.685899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:44:28.686068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:28.687768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:44:28.692462 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:44:28.698151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:44:28.698347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:44:28.702207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:44:28.702417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:44:28.706260 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:44:28.706385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:44:28.721804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:28.722198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:44:28.726975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:44:28.731584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:44:28.744217 augenrules[1646]: No rules
Apr 17 23:44:28.744903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:44:28.751395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:44:28.753843 systemd-resolved[1617]: Positive Trust Anchors:
Apr 17 23:44:28.753866 systemd-resolved[1617]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:44:28.753904 systemd-resolved[1617]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:44:28.755206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:44:28.755457 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:44:28.762031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:28.763465 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:44:28.769294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:44:28.769471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:44:28.773784 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:44:28.773962 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:44:28.774455 systemd-resolved[1617]: Using system hostname 'ci-4081.3.6-n-7251cc3c8a'.
Apr 17 23:44:28.778180 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:44:28.782045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:44:28.782219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:44:28.786222 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:44:28.786391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:44:28.793307 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:44:28.799076 systemd[1]: Reached target network.target - Network.
Apr 17 23:44:28.802021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:44:28.805567 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:44:28.805649 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:44:29.201912 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:44:29.205860 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:44:29.377274 systemd-networkd[1329]: eth0: Gained IPv6LL
Apr 17 23:44:29.380560 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:44:29.385088 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:44:32.601809 ldconfig[1289]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:44:32.615283 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:44:32.624934 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:44:32.636477 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:44:32.640100 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:44:32.643561 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:44:32.647772 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:44:32.651451 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:44:32.654554 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:44:32.658227 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:44:32.665813 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:44:32.665864 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:44:32.668504 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:44:32.672113 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:44:32.676459 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:44:32.683662 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:44:32.687389 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:44:32.690481 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:44:32.693527 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:44:32.696596 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:44:32.696637 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:44:32.705810 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 17 23:44:32.710840 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:44:32.720900 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:44:32.728916 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:44:32.733839 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:44:32.739662 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:44:32.742662 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:44:32.742738 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 17 23:44:32.748940 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 17 23:44:32.752784 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 17 23:44:32.755093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:32.761679 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:44:32.767896 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:44:32.773862 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:44:32.785860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:44:32.792896 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:44:32.801739 KVP[1674]: KVP starting; pid is:1674
Apr 17 23:44:32.812578 jq[1672]: false
Apr 17 23:44:32.812861 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:44:32.816659 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:44:32.819240 (chronyd)[1668]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 17 23:44:32.822018 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:44:32.823095 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:44:32.827796 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:44:32.849270 kernel: hv_utils: KVP IC version 4.0
Apr 17 23:44:32.836102 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:44:32.831639 KVP[1674]: KVP LIC Version: 3.1
Apr 17 23:44:32.836391 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:44:32.859624 chronyd[1697]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 17 23:44:32.877518 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:44:32.878267 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:44:32.892325 jq[1688]: true
Apr 17 23:44:32.896947 chronyd[1697]: Timezone right/UTC failed leap second check, ignoring
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found loop4
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found loop5
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found loop6
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found loop7
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda1
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda2
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda3
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found usr
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda4
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda6
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda7
Apr 17 23:44:32.913941 extend-filesystems[1673]: Found sda9
Apr 17 23:44:32.913941 extend-filesystems[1673]: Checking size of /dev/sda9
Apr 17 23:44:32.907270 systemd[1]: Started chronyd.service - NTP client/server.
Apr 17 23:44:32.897162 chronyd[1697]: Loaded seccomp filter (level 2)
Apr 17 23:44:33.042008 update_engine[1686]: I20260417 23:44:33.025079 1686 main.cc:92] Flatcar Update Engine starting
Apr 17 23:44:33.042008 update_engine[1686]: I20260417 23:44:33.026466 1686 update_check_scheduler.cc:74] Next update check in 5m53s
Apr 17 23:44:33.049011 extend-filesystems[1673]: Old size kept for /dev/sda9
Apr 17 23:44:33.049011 extend-filesystems[1673]: Found sr0
Apr 17 23:44:33.074974 tar[1691]: linux-amd64/LICENSE
Apr 17 23:44:33.074974 tar[1691]: linux-amd64/helm
Apr 17 23:44:32.927065 (ntainerd)[1709]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:44:32.953317 dbus-daemon[1671]: [system] SELinux support is enabled
Apr 17 23:44:32.928307 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:44:33.081206 jq[1706]: true
Apr 17 23:44:32.928527 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:44:32.953747 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:44:32.963277 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:44:32.963316 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:44:32.972336 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:44:32.972365 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:44:33.002384 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:44:33.003329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:44:33.020262 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:44:33.028504 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:44:33.047471 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:44:33.051607 systemd-logind[1683]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:44:33.056926 systemd-logind[1683]: New seat seat0.
Apr 17 23:44:33.062507 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:44:33.102853 coreos-metadata[1670]: Apr 17 23:44:33.102 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 17 23:44:33.108021 coreos-metadata[1670]: Apr 17 23:44:33.106 INFO Fetch successful Apr 17 23:44:33.108021 coreos-metadata[1670]: Apr 17 23:44:33.106 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 17 23:44:33.112488 coreos-metadata[1670]: Apr 17 23:44:33.112 INFO Fetch successful Apr 17 23:44:33.112488 coreos-metadata[1670]: Apr 17 23:44:33.112 INFO Fetching http://168.63.129.16/machine/b04423e5-b35b-45b4-a1d3-0244d6e285b5/f1a57a70%2D5bdf%2D4ec3%2Da120%2Dd2b7c2050737.%5Fci%2D4081.3.6%2Dn%2D7251cc3c8a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 17 23:44:33.114801 coreos-metadata[1670]: Apr 17 23:44:33.114 INFO Fetch successful Apr 17 23:44:33.115150 coreos-metadata[1670]: Apr 17 23:44:33.115 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 17 23:44:33.132530 coreos-metadata[1670]: Apr 17 23:44:33.130 INFO Fetch successful Apr 17 23:44:33.180730 bash[1749]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:44:33.185247 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:44:33.193690 sshd_keygen[1707]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:44:33.196971 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 17 23:44:33.212059 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:44:33.217558 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:44:33.282730 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1733) Apr 17 23:44:33.318137 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Apr 17 23:44:33.338016 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:44:33.351977 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 17 23:44:33.438690 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:44:33.438951 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:44:33.452953 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 17 23:44:33.493274 locksmithd[1732]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:44:33.495975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:44:33.531340 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:44:33.543019 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:44:33.560330 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:44:33.564872 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:44:33.999739 tar[1691]: linux-amd64/README.md Apr 17 23:44:34.012604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:44:34.223097 containerd[1709]: time="2026-04-17T23:44:34.222999700Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:44:34.264989 containerd[1709]: time="2026-04-17T23:44:34.264039500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266113 containerd[1709]: time="2026-04-17T23:44:34.265968500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266113 containerd[1709]: time="2026-04-17T23:44:34.266014200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:44:34.266113 containerd[1709]: time="2026-04-17T23:44:34.266037200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:44:34.266289 containerd[1709]: time="2026-04-17T23:44:34.266214900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:44:34.266289 containerd[1709]: time="2026-04-17T23:44:34.266237900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266358 containerd[1709]: time="2026-04-17T23:44:34.266313100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266358 containerd[1709]: time="2026-04-17T23:44:34.266331700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266588 containerd[1709]: time="2026-04-17T23:44:34.266552000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266588 containerd[1709]: time="2026-04-17T23:44:34.266578900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266694 containerd[1709]: time="2026-04-17T23:44:34.266598200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266694 containerd[1709]: time="2026-04-17T23:44:34.266611500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:34.266818 containerd[1709]: time="2026-04-17T23:44:34.266740000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:34.267614 containerd[1709]: time="2026-04-17T23:44:34.267015700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:34.267614 containerd[1709]: time="2026-04-17T23:44:34.267191100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:34.267614 containerd[1709]: time="2026-04-17T23:44:34.267211700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:44:34.267614 containerd[1709]: time="2026-04-17T23:44:34.267311400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:44:34.267614 containerd[1709]: time="2026-04-17T23:44:34.267364100Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:44:34.290828 containerd[1709]: time="2026-04-17T23:44:34.290441300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:44:34.290828 containerd[1709]: time="2026-04-17T23:44:34.290493700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 17 23:44:34.290828 containerd[1709]: time="2026-04-17T23:44:34.290517800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:44:34.290828 containerd[1709]: time="2026-04-17T23:44:34.290539500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:44:34.290828 containerd[1709]: time="2026-04-17T23:44:34.290568000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:44:34.290828 containerd[1709]: time="2026-04-17T23:44:34.290727200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 17 23:44:34.291626 containerd[1709]: time="2026-04-17T23:44:34.291539200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:44:34.291742 containerd[1709]: time="2026-04-17T23:44:34.291695100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:44:34.291788 containerd[1709]: time="2026-04-17T23:44:34.291751900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:44:34.291788 containerd[1709]: time="2026-04-17T23:44:34.291780900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:44:34.291880 containerd[1709]: time="2026-04-17T23:44:34.291808200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:44:34.291880 containerd[1709]: time="2026-04-17T23:44:34.291835100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 17 23:44:34.291880 containerd[1709]: time="2026-04-17T23:44:34.291859200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:44:34.291984 containerd[1709]: time="2026-04-17T23:44:34.291892000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:44:34.291984 containerd[1709]: time="2026-04-17T23:44:34.291919300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:44:34.291984 containerd[1709]: time="2026-04-17T23:44:34.291945300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 17 23:44:34.291984 containerd[1709]: time="2026-04-17T23:44:34.291968600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:44:34.292127 containerd[1709]: time="2026-04-17T23:44:34.291991600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:44:34.292127 containerd[1709]: time="2026-04-17T23:44:34.292025700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292127 containerd[1709]: time="2026-04-17T23:44:34.292052200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292127 containerd[1709]: time="2026-04-17T23:44:34.292084700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292127 containerd[1709]: time="2026-04-17T23:44:34.292112200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 17 23:44:34.292310 containerd[1709]: time="2026-04-17T23:44:34.292132100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292310 containerd[1709]: time="2026-04-17T23:44:34.292158100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292310 containerd[1709]: time="2026-04-17T23:44:34.292181100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292310 containerd[1709]: time="2026-04-17T23:44:34.292204900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292310 containerd[1709]: time="2026-04-17T23:44:34.292247900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292310 containerd[1709]: time="2026-04-17T23:44:34.292280700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292310 containerd[1709]: time="2026-04-17T23:44:34.292304500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292325400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292348800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292378200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292416200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292447300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292471900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292540100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292569900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:44:34.292588 containerd[1709]: time="2026-04-17T23:44:34.292587400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:44:34.294937 containerd[1709]: time="2026-04-17T23:44:34.292612100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:44:34.294937 containerd[1709]: time="2026-04-17T23:44:34.292632700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:44:34.294937 containerd[1709]: time="2026-04-17T23:44:34.292656200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:44:34.294937 containerd[1709]: time="2026-04-17T23:44:34.292672600Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:44:34.294937 containerd[1709]: time="2026-04-17T23:44:34.292693000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:44:34.295117 containerd[1709]: time="2026-04-17T23:44:34.293146800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:44:34.295117 containerd[1709]: time="2026-04-17T23:44:34.293245500Z" level=info msg="Connect containerd service" Apr 17 23:44:34.295117 containerd[1709]: time="2026-04-17T23:44:34.293309500Z" level=info msg="using legacy CRI server" Apr 17 23:44:34.295117 containerd[1709]: time="2026-04-17T23:44:34.293326100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:44:34.295117 containerd[1709]: time="2026-04-17T23:44:34.293682500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:44:34.295117 containerd[1709]: time="2026-04-17T23:44:34.294977100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296158500Z" level=info msg="Start subscribing containerd event" Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296235400Z" level=info msg="Start recovering state" Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296311000Z" level=info msg="Start event monitor" Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296324200Z" level=info msg="Start 
snapshots syncer" Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296335900Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296351800Z" level=info msg="Start streaming server" Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296838600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:44:34.301568 containerd[1709]: time="2026-04-17T23:44:34.296899200Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:44:34.297075 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:44:34.302901 containerd[1709]: time="2026-04-17T23:44:34.302878900Z" level=info msg="containerd successfully booted in 0.082277s" Apr 17 23:44:34.466577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:44:34.471310 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:44:34.474701 systemd[1]: Startup finished in 1.015s (kernel) + 12.775s (initrd) + 12.863s (userspace) = 26.654s. Apr 17 23:44:34.482918 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:44:34.917268 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 17 23:44:34.921146 login[1813]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 17 23:44:34.932214 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:44:34.939804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:44:34.943052 systemd-logind[1683]: New session 1 of user core. Apr 17 23:44:34.946763 systemd-logind[1683]: New session 2 of user core. Apr 17 23:44:34.975375 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 17 23:44:34.981204 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:44:34.995946 (systemd)[1841]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:44:35.088739 kubelet[1829]: E0417 23:44:35.085615 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:44:35.091200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:44:35.091442 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:44:35.149007 systemd[1841]: Queued start job for default target default.target. Apr 17 23:44:35.155820 systemd[1841]: Created slice app.slice - User Application Slice. Apr 17 23:44:35.155857 systemd[1841]: Reached target paths.target - Paths. Apr 17 23:44:35.155875 systemd[1841]: Reached target timers.target - Timers. Apr 17 23:44:35.157123 systemd[1841]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:44:35.169249 systemd[1841]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:44:35.169513 systemd[1841]: Reached target sockets.target - Sockets. Apr 17 23:44:35.169632 systemd[1841]: Reached target basic.target - Basic System. Apr 17 23:44:35.169778 systemd[1841]: Reached target default.target - Main User Target. Apr 17 23:44:35.170046 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:44:35.170200 systemd[1841]: Startup finished in 165ms. Apr 17 23:44:35.177861 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:44:35.179467 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 17 23:44:35.688988 waagent[1805]: 2026-04-17T23:44:35.688872Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 17 23:44:35.692475 waagent[1805]: 2026-04-17T23:44:35.692406Z INFO Daemon Daemon OS: flatcar 4081.3.6 Apr 17 23:44:35.695076 waagent[1805]: 2026-04-17T23:44:35.695019Z INFO Daemon Daemon Python: 3.11.9 Apr 17 23:44:35.697530 waagent[1805]: 2026-04-17T23:44:35.697475Z INFO Daemon Daemon Run daemon Apr 17 23:44:35.700085 waagent[1805]: 2026-04-17T23:44:35.699932Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Apr 17 23:44:35.704910 waagent[1805]: 2026-04-17T23:44:35.704855Z INFO Daemon Daemon Using waagent for provisioning Apr 17 23:44:35.707919 waagent[1805]: 2026-04-17T23:44:35.707873Z INFO Daemon Daemon Activate resource disk Apr 17 23:44:35.710583 waagent[1805]: 2026-04-17T23:44:35.710531Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 17 23:44:35.718014 waagent[1805]: 2026-04-17T23:44:35.717959Z INFO Daemon Daemon Found device: None Apr 17 23:44:35.731339 waagent[1805]: 2026-04-17T23:44:35.718241Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 17 23:44:35.731339 waagent[1805]: 2026-04-17T23:44:35.719294Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 17 23:44:35.731339 waagent[1805]: 2026-04-17T23:44:35.721893Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 17 23:44:35.731339 waagent[1805]: 2026-04-17T23:44:35.722766Z INFO Daemon Daemon Running default provisioning handler Apr 17 23:44:35.751887 waagent[1805]: 2026-04-17T23:44:35.731431Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 17 23:44:35.751887 waagent[1805]: 2026-04-17T23:44:35.732523Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 17 23:44:35.751887 waagent[1805]: 2026-04-17T23:44:35.733568Z INFO Daemon Daemon cloud-init is enabled: False Apr 17 23:44:35.751887 waagent[1805]: 2026-04-17T23:44:35.734177Z INFO Daemon Daemon Copying ovf-env.xml Apr 17 23:44:35.884727 waagent[1805]: 2026-04-17T23:44:35.882285Z INFO Daemon Daemon Successfully mounted dvd Apr 17 23:44:35.912488 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 17 23:44:35.914745 waagent[1805]: 2026-04-17T23:44:35.913936Z INFO Daemon Daemon Detect protocol endpoint Apr 17 23:44:35.917116 waagent[1805]: 2026-04-17T23:44:35.917052Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 17 23:44:35.920243 waagent[1805]: 2026-04-17T23:44:35.920190Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 17 23:44:35.923727 waagent[1805]: 2026-04-17T23:44:35.923659Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 17 23:44:35.926791 waagent[1805]: 2026-04-17T23:44:35.926738Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 17 23:44:35.929715 waagent[1805]: 2026-04-17T23:44:35.929660Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 17 23:44:35.956279 waagent[1805]: 2026-04-17T23:44:35.956165Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 17 23:44:35.965280 waagent[1805]: 2026-04-17T23:44:35.956685Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 17 23:44:35.965280 waagent[1805]: 2026-04-17T23:44:35.957138Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 17 23:44:36.061444 waagent[1805]: 2026-04-17T23:44:36.061335Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 17 23:44:36.065346 waagent[1805]: 2026-04-17T23:44:36.065212Z INFO Daemon Daemon Forcing an update of the goal state. 
Apr 17 23:44:36.070835 waagent[1805]: 2026-04-17T23:44:36.070773Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 17 23:44:36.084903 waagent[1805]: 2026-04-17T23:44:36.084847Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181 Apr 17 23:44:36.086274 waagent[1805]: 2026-04-17T23:44:36.085557Z INFO Daemon Apr 17 23:44:36.086274 waagent[1805]: 2026-04-17T23:44:36.086219Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 297f884c-349b-46d6-8a18-ed8ec28ca400 eTag: 11547147417606895599 source: Fabric] Apr 17 23:44:36.086660 waagent[1805]: 2026-04-17T23:44:36.086609Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 17 23:44:36.087254 waagent[1805]: 2026-04-17T23:44:36.087197Z INFO Daemon Apr 17 23:44:36.087370 waagent[1805]: 2026-04-17T23:44:36.087321Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 17 23:44:36.109548 waagent[1805]: 2026-04-17T23:44:36.109489Z INFO Daemon Daemon Downloading artifacts profile blob Apr 17 23:44:36.249195 waagent[1805]: 2026-04-17T23:44:36.249105Z INFO Daemon Downloaded certificate {'thumbprint': 'F161EFB58251B83B08AB41DCC256F289062DCC3F', 'hasPrivateKey': True} Apr 17 23:44:36.256394 waagent[1805]: 2026-04-17T23:44:36.249862Z INFO Daemon Fetch goal state completed Apr 17 23:44:36.295512 waagent[1805]: 2026-04-17T23:44:36.295425Z INFO Daemon Daemon Starting provisioning Apr 17 23:44:36.298737 waagent[1805]: 2026-04-17T23:44:36.295787Z INFO Daemon Daemon Handle ovf-env.xml. 
Apr 17 23:44:36.298737 waagent[1805]: 2026-04-17T23:44:36.296839Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-7251cc3c8a] Apr 17 23:44:36.300284 waagent[1805]: 2026-04-17T23:44:36.300233Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-7251cc3c8a] Apr 17 23:44:36.302937 waagent[1805]: 2026-04-17T23:44:36.301344Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 17 23:44:36.302937 waagent[1805]: 2026-04-17T23:44:36.302325Z INFO Daemon Daemon Primary interface is [eth0] Apr 17 23:44:36.329090 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:44:36.329393 systemd-networkd[1329]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:44:36.329509 systemd-networkd[1329]: eth0: DHCP lease lost Apr 17 23:44:36.330326 waagent[1805]: 2026-04-17T23:44:36.330253Z INFO Daemon Daemon Create user account if not exists Apr 17 23:44:36.333475 waagent[1805]: 2026-04-17T23:44:36.333419Z INFO Daemon Daemon User core already exists, skip useradd Apr 17 23:44:36.336886 waagent[1805]: 2026-04-17T23:44:36.336828Z INFO Daemon Daemon Configure sudoer Apr 17 23:44:36.339162 systemd-networkd[1329]: eth0: DHCPv6 lease lost Apr 17 23:44:36.339624 waagent[1805]: 2026-04-17T23:44:36.339570Z INFO Daemon Daemon Configure sshd Apr 17 23:44:36.341842 waagent[1805]: 2026-04-17T23:44:36.341789Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 17 23:44:36.347590 waagent[1805]: 2026-04-17T23:44:36.341975Z INFO Daemon Daemon Deploy ssh public key. 
Apr 17 23:44:36.389759 systemd-networkd[1329]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 17 23:44:37.487305 waagent[1805]: 2026-04-17T23:44:37.487211Z INFO Daemon Daemon Provisioning complete Apr 17 23:44:37.501077 waagent[1805]: 2026-04-17T23:44:37.501018Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 17 23:44:37.509132 waagent[1805]: 2026-04-17T23:44:37.501333Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 17 23:44:37.509132 waagent[1805]: 2026-04-17T23:44:37.502412Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 17 23:44:37.627613 waagent[1894]: 2026-04-17T23:44:37.627515Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 17 23:44:37.628082 waagent[1894]: 2026-04-17T23:44:37.627685Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Apr 17 23:44:37.628082 waagent[1894]: 2026-04-17T23:44:37.627794Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 17 23:44:37.678446 waagent[1894]: 2026-04-17T23:44:37.678351Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 17 23:44:37.678671 waagent[1894]: 2026-04-17T23:44:37.678619Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 17 23:44:37.678783 waagent[1894]: 2026-04-17T23:44:37.678738Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 17 23:44:37.685981 waagent[1894]: 2026-04-17T23:44:37.685912Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 17 23:44:37.690945 waagent[1894]: 2026-04-17T23:44:37.690887Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Apr 17 23:44:37.691396 waagent[1894]: 2026-04-17T23:44:37.691338Z INFO ExtHandler Apr 17 23:44:37.691472 waagent[1894]: 
2026-04-17T23:44:37.691430Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1429041e-ab64-41ba-b56a-a780be112f4c eTag: 11547147417606895599 source: Fabric] Apr 17 23:44:37.691793 waagent[1894]: 2026-04-17T23:44:37.691741Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 17 23:44:37.692355 waagent[1894]: 2026-04-17T23:44:37.692299Z INFO ExtHandler Apr 17 23:44:37.692418 waagent[1894]: 2026-04-17T23:44:37.692384Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 17 23:44:37.695906 waagent[1894]: 2026-04-17T23:44:37.695859Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 17 23:44:37.754058 waagent[1894]: 2026-04-17T23:44:37.753920Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F161EFB58251B83B08AB41DCC256F289062DCC3F', 'hasPrivateKey': True} Apr 17 23:44:37.754531 waagent[1894]: 2026-04-17T23:44:37.754470Z INFO ExtHandler Fetch goal state completed Apr 17 23:44:37.770344 waagent[1894]: 2026-04-17T23:44:37.770273Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1894 Apr 17 23:44:37.770501 waagent[1894]: 2026-04-17T23:44:37.770451Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 17 23:44:37.772068 waagent[1894]: 2026-04-17T23:44:37.772012Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Apr 17 23:44:37.772436 waagent[1894]: 2026-04-17T23:44:37.772384Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 17 23:44:37.807134 waagent[1894]: 2026-04-17T23:44:37.807084Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 17 23:44:37.807364 waagent[1894]: 2026-04-17T23:44:37.807314Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 17 
23:44:37.813886 waagent[1894]: 2026-04-17T23:44:37.813837Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 17 23:44:37.820908 systemd[1]: Reloading requested from client PID 1907 ('systemctl') (unit waagent.service)... Apr 17 23:44:37.820927 systemd[1]: Reloading... Apr 17 23:44:37.907598 zram_generator::config[1937]: No configuration found. Apr 17 23:44:38.049535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:44:38.131737 systemd[1]: Reloading finished in 310 ms. Apr 17 23:44:38.158738 waagent[1894]: 2026-04-17T23:44:38.158303Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 17 23:44:38.166657 systemd[1]: Reloading requested from client PID 1998 ('systemctl') (unit waagent.service)... Apr 17 23:44:38.166675 systemd[1]: Reloading... Apr 17 23:44:38.247743 zram_generator::config[2028]: No configuration found. Apr 17 23:44:38.381692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:44:38.468630 systemd[1]: Reloading finished in 301 ms. Apr 17 23:44:38.497698 waagent[1894]: 2026-04-17T23:44:38.496476Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 17 23:44:38.497698 waagent[1894]: 2026-04-17T23:44:38.496658Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 17 23:44:39.529936 waagent[1894]: 2026-04-17T23:44:39.529838Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
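The systemd reload above warns twice that `docker.socket` still references the legacy `/var/run/` directory. A drop-in override along these lines would silence it (a sketch only; the file path and contents below are hypothetical, not taken from this system):

```ini
# Hypothetical drop-in: /etc/systemd/system/docker.socket.d/10-runtime-dir.conf
[Socket]
# An empty assignment clears the inherited ListenStream= list,
# then the socket is re-declared under /run instead of /var/run.
ListenStream=
ListenStream=/run/docker.sock
```

After adding a drop-in like this, `systemctl daemon-reload` picks it up, which is exactly the reload waagent triggers in the records above.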
Apr 17 23:44:39.530602 waagent[1894]: 2026-04-17T23:44:39.530538Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 17 23:44:39.531460 waagent[1894]: 2026-04-17T23:44:39.531398Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 17 23:44:39.531652 waagent[1894]: 2026-04-17T23:44:39.531602Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 17 23:44:39.541100 waagent[1894]: 2026-04-17T23:44:39.541046Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 17 23:44:39.541452 waagent[1894]: 2026-04-17T23:44:39.541356Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 17 23:44:39.541610 waagent[1894]: 2026-04-17T23:44:39.541524Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 17 23:44:39.541923 waagent[1894]: 2026-04-17T23:44:39.541797Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Apr 17 23:44:39.542096 waagent[1894]: 2026-04-17T23:44:39.541984Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 17 23:44:39.542282 waagent[1894]: 2026-04-17T23:44:39.542233Z INFO EnvHandler ExtHandler Configure routes Apr 17 23:44:39.542803 waagent[1894]: 2026-04-17T23:44:39.542694Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 17 23:44:39.542803 waagent[1894]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 17 23:44:39.542803 waagent[1894]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 17 23:44:39.542803 waagent[1894]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 17 23:44:39.542803 waagent[1894]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 17 23:44:39.542803 waagent[1894]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 17 23:44:39.542803 waagent[1894]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 17 23:44:39.543482 waagent[1894]: 2026-04-17T23:44:39.543184Z INFO EnvHandler ExtHandler Gateway:None Apr 17 23:44:39.543482 waagent[1894]: 2026-04-17T23:44:39.543280Z INFO EnvHandler ExtHandler Routes:None Apr 17 23:44:39.543888 waagent[1894]: 2026-04-17T23:44:39.543698Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 17 23:44:39.543888 waagent[1894]: 2026-04-17T23:44:39.543764Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 17 23:44:39.544210 waagent[1894]: 2026-04-17T23:44:39.544158Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 17 23:44:39.544271 waagent[1894]: 2026-04-17T23:44:39.544215Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
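The `/proc/net/route` dump that MonitorHandler prints above stores destination, gateway, and mask fields as little-endian hexadecimal. A small sketch, using only the Python standard library, to decode those fields back into dotted-quad form:

```python
import socket
import struct

def decode_route_addr(hex_field: str) -> str:
    """Decode a little-endian hex address field from /proc/net/route.

    The kernel prints each IPv4 address as 8 hex digits in host
    (little-endian on x86) byte order, so "0100000A" is 10.0.0.1.
    """
    return socket.inet_ntoa(struct.pack("<I", int(hex_field, 16)))

# Fields taken from the routing table dump above:
print(decode_route_addr("0100000A"))  # default gateway 10.0.0.1
print(decode_route_addr("10813FA8"))  # Azure wireserver 168.63.129.16
print(decode_route_addr("FEA9FEA9"))  # link-local metadata 169.254.169.254
```

This makes the table readable at a glance: the default route via 10.0.0.1, the 10.0.0.0/24 on-link route, and host routes to the wireserver and metadata endpoints.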
Apr 17 23:44:39.544519 waagent[1894]: 2026-04-17T23:44:39.544477Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 17 23:44:39.549258 waagent[1894]: 2026-04-17T23:44:39.549212Z INFO ExtHandler ExtHandler Apr 17 23:44:39.550729 waagent[1894]: 2026-04-17T23:44:39.550678Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 25578535-1f1f-4a6f-8dac-78817f9419da correlation fb8ae881-29fd-4784-a1f2-dfd5ec625918 created: 2026-04-17T23:43:39.519844Z] Apr 17 23:44:39.551116 waagent[1894]: 2026-04-17T23:44:39.551068Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 17 23:44:39.551623 waagent[1894]: 2026-04-17T23:44:39.551577Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Apr 17 23:44:39.585446 waagent[1894]: 2026-04-17T23:44:39.585383Z INFO MonitorHandler ExtHandler Network interfaces: Apr 17 23:44:39.585446 waagent[1894]: Executing ['ip', '-a', '-o', 'link']: Apr 17 23:44:39.585446 waagent[1894]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 17 23:44:39.585446 waagent[1894]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b4:de:f0 brd ff:ff:ff:ff:ff:ff Apr 17 23:44:39.585446 waagent[1894]: 3: enP23730s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b4:de:f0 brd ff:ff:ff:ff:ff:ff\ altname enP23730p0s2 Apr 17 23:44:39.585446 waagent[1894]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 17 23:44:39.585446 waagent[1894]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 17 23:44:39.585446 waagent[1894]: 2: eth0 inet 10.0.0.19/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 17 23:44:39.585446 waagent[1894]: Executing 
['ip', '-6', '-a', '-o', 'address']: Apr 17 23:44:39.585446 waagent[1894]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 17 23:44:39.585446 waagent[1894]: 2: eth0 inet6 fe80::20d:3aff:feb4:def0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 17 23:44:39.585929 waagent[1894]: 2026-04-17T23:44:39.585568Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A6DB2933-BFAF-414C-B741-7FF29D0344F2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 17 23:44:39.644184 waagent[1894]: 2026-04-17T23:44:39.644102Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Apr 17 23:44:39.644184 waagent[1894]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 17 23:44:39.644184 waagent[1894]: pkts bytes target prot opt in out source destination Apr 17 23:44:39.644184 waagent[1894]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 17 23:44:39.644184 waagent[1894]: pkts bytes target prot opt in out source destination Apr 17 23:44:39.644184 waagent[1894]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 17 23:44:39.644184 waagent[1894]: pkts bytes target prot opt in out source destination Apr 17 23:44:39.644184 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 17 23:44:39.644184 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 17 23:44:39.644184 waagent[1894]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 17 23:44:39.647778 waagent[1894]: 2026-04-17T23:44:39.647677Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 17 23:44:39.647778 waagent[1894]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 17 23:44:39.647778 waagent[1894]: pkts bytes target prot opt in out source destination Apr 17 23:44:39.647778 waagent[1894]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 17 
23:44:39.647778 waagent[1894]: pkts bytes target prot opt in out source destination Apr 17 23:44:39.647778 waagent[1894]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 17 23:44:39.647778 waagent[1894]: pkts bytes target prot opt in out source destination Apr 17 23:44:39.647778 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 17 23:44:39.647778 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 17 23:44:39.647778 waagent[1894]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 17 23:44:39.648189 waagent[1894]: 2026-04-17T23:44:39.648047Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 17 23:44:45.231857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:44:45.238946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:44:45.344593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:44:45.349217 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:44:46.014370 kubelet[2128]: E0417 23:44:46.014270 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:44:46.018057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:44:46.018287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:44:56.048013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:44:56.049035 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
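The three OUTPUT rules EnvHandler reports above gate traffic to the Azure wireserver (168.63.129.16): DNS is allowed, root-owned (UID 0, i.e. the agent's) connections are allowed, and new connections from anyone else are dropped. Equivalent `iptables` invocations would look roughly like this (a sketch of the resulting ruleset, not the agent's actual code path; requires root):

```
iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
```

The `owner` match only works in the OUTPUT chain, which is why all three rules land there rather than in INPUT or FORWARD.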
Apr 17 23:44:56.056920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:44:56.060018 systemd[1]: Started sshd@0-10.0.0.19:22-20.229.252.112:48830.service - OpenSSH per-connection server daemon (20.229.252.112:48830). Apr 17 23:44:56.688873 chronyd[1697]: Selected source PHC0 Apr 17 23:44:56.837250 sshd[2137]: Accepted publickey for core from 20.229.252.112 port 48830 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:44:56.838760 sshd[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:56.842833 systemd-logind[1683]: New session 3 of user core. Apr 17 23:44:56.847869 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:44:56.881226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:44:56.893081 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:44:56.929538 kubelet[2147]: E0417 23:44:56.929446 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:44:56.931889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:44:56.932105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:44:56.976046 systemd[1]: Started sshd@1-10.0.0.19:22-20.229.252.112:48840.service - OpenSSH per-connection server daemon (20.229.252.112:48840). 
Apr 17 23:44:57.106691 sshd[2157]: Accepted publickey for core from 20.229.252.112 port 48840 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:44:57.108181 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:57.112759 systemd-logind[1683]: New session 4 of user core. Apr 17 23:44:57.118852 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:44:57.217171 sshd[2157]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:57.220938 systemd[1]: sshd@1-10.0.0.19:22-20.229.252.112:48840.service: Deactivated successfully. Apr 17 23:44:57.222857 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:44:57.223558 systemd-logind[1683]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:44:57.224600 systemd-logind[1683]: Removed session 4. Apr 17 23:44:57.242243 systemd[1]: Started sshd@2-10.0.0.19:22-20.229.252.112:48848.service - OpenSSH per-connection server daemon (20.229.252.112:48848). Apr 17 23:44:57.361736 sshd[2164]: Accepted publickey for core from 20.229.252.112 port 48848 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:44:57.363058 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:57.367654 systemd-logind[1683]: New session 5 of user core. Apr 17 23:44:57.373873 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:44:57.468000 sshd[2164]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:57.471735 systemd[1]: sshd@2-10.0.0.19:22-20.229.252.112:48848.service: Deactivated successfully. Apr 17 23:44:57.473607 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:44:57.474335 systemd-logind[1683]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:44:57.475366 systemd-logind[1683]: Removed session 5. 
Apr 17 23:44:57.496392 systemd[1]: Started sshd@3-10.0.0.19:22-20.229.252.112:48864.service - OpenSSH per-connection server daemon (20.229.252.112:48864). Apr 17 23:44:57.629172 sshd[2171]: Accepted publickey for core from 20.229.252.112 port 48864 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:44:57.630673 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:57.635328 systemd-logind[1683]: New session 6 of user core. Apr 17 23:44:57.650894 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:44:57.750553 sshd[2171]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:57.754511 systemd[1]: sshd@3-10.0.0.19:22-20.229.252.112:48864.service: Deactivated successfully. Apr 17 23:44:57.756403 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:44:57.757183 systemd-logind[1683]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:44:57.758325 systemd-logind[1683]: Removed session 6. Apr 17 23:44:57.776304 systemd[1]: Started sshd@4-10.0.0.19:22-20.229.252.112:48874.service - OpenSSH per-connection server daemon (20.229.252.112:48874). Apr 17 23:44:57.897778 sshd[2178]: Accepted publickey for core from 20.229.252.112 port 48874 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:44:57.899216 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:57.904249 systemd-logind[1683]: New session 7 of user core. Apr 17 23:44:57.914208 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 17 23:44:58.150315 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:44:58.150727 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:44:58.181043 sudo[2181]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:58.197664 sshd[2178]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:58.200808 systemd[1]: sshd@4-10.0.0.19:22-20.229.252.112:48874.service: Deactivated successfully. Apr 17 23:44:58.202917 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:44:58.204374 systemd-logind[1683]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:44:58.205491 systemd-logind[1683]: Removed session 7. Apr 17 23:44:58.222302 systemd[1]: Started sshd@5-10.0.0.19:22-20.229.252.112:48880.service - OpenSSH per-connection server daemon (20.229.252.112:48880). Apr 17 23:44:58.346740 sshd[2186]: Accepted publickey for core from 20.229.252.112 port 48880 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:44:58.347644 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:58.351646 systemd-logind[1683]: New session 8 of user core. Apr 17 23:44:58.357876 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 23:44:58.440153 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:44:58.440971 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:44:58.444413 sudo[2190]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:58.449537 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:44:58.449950 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:44:58.464242 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:44:58.467478 auditctl[2193]: No rules Apr 17 23:44:58.465915 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:44:58.466072 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:44:58.469050 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:44:58.502287 augenrules[2211]: No rules Apr 17 23:44:58.503739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:44:58.505087 sudo[2189]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:58.521610 sshd[2186]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:58.524642 systemd[1]: sshd@5-10.0.0.19:22-20.229.252.112:48880.service: Deactivated successfully. Apr 17 23:44:58.526503 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:44:58.528141 systemd-logind[1683]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:44:58.529282 systemd-logind[1683]: Removed session 8. Apr 17 23:44:58.551415 systemd[1]: Started sshd@6-10.0.0.19:22-20.229.252.112:48884.service - OpenSSH per-connection server daemon (20.229.252.112:48884). 
Apr 17 23:44:58.675321 sshd[2219]: Accepted publickey for core from 20.229.252.112 port 48884 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:44:58.675945 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:58.681078 systemd-logind[1683]: New session 9 of user core. Apr 17 23:44:58.689913 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:44:58.772663 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:44:58.773052 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:45:01.192041 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:45:01.193558 (dockerd)[2238]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:45:03.849677 dockerd[2238]: time="2026-04-17T23:45:03.848952850Z" level=info msg="Starting up" Apr 17 23:45:05.555083 dockerd[2238]: time="2026-04-17T23:45:05.555028889Z" level=info msg="Loading containers: start." Apr 17 23:45:05.778763 kernel: Initializing XFRM netlink socket Apr 17 23:45:06.068490 systemd-networkd[1329]: docker0: Link UP Apr 17 23:45:06.149943 dockerd[2238]: time="2026-04-17T23:45:06.149898835Z" level=info msg="Loading containers: done." 
Apr 17 23:45:06.928517 dockerd[2238]: time="2026-04-17T23:45:06.928459070Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:45:06.929066 dockerd[2238]: time="2026-04-17T23:45:06.928606480Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:45:06.929066 dockerd[2238]: time="2026-04-17T23:45:06.928793593Z" level=info msg="Daemon has completed initialization" Apr 17 23:45:06.983200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 17 23:45:06.993989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:07.101262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:07.105959 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:45:07.756675 kubelet[2348]: E0417 23:45:07.756617 2348 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:45:07.759120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:45:07.759350 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:45:10.928192 dockerd[2238]: time="2026-04-17T23:45:10.928117159Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:45:10.928803 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 17 23:45:11.477339 containerd[1709]: time="2026-04-17T23:45:11.477286758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 17 23:45:13.563289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464939794.mount: Deactivated successfully. Apr 17 23:45:15.247771 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Apr 17 23:45:17.700733 containerd[1709]: time="2026-04-17T23:45:17.700671172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:17.703390 containerd[1709]: time="2026-04-17T23:45:17.703348804Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100522" Apr 17 23:45:17.706259 containerd[1709]: time="2026-04-17T23:45:17.706227038Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:17.710555 containerd[1709]: time="2026-04-17T23:45:17.710503689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:17.711741 containerd[1709]: time="2026-04-17T23:45:17.711540401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 6.234208143s" Apr 17 23:45:17.711741 containerd[1709]: time="2026-04-17T23:45:17.711581002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference 
\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 17 23:45:17.712348 containerd[1709]: time="2026-04-17T23:45:17.712324210Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 17 23:45:17.832392 update_engine[1686]: I20260417 23:45:17.832288 1686 update_attempter.cc:509] Updating boot flags... Apr 17 23:45:17.890642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 17 23:45:17.901792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:17.915772 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2458) Apr 17 23:45:18.072818 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (2457) Apr 17 23:45:18.806002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:18.810693 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:45:18.847830 kubelet[2520]: E0417 23:45:18.847778 2520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:45:18.850039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:45:18.850261 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 17 23:45:20.097326 containerd[1709]: time="2026-04-17T23:45:20.097264267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:20.099739 containerd[1709]: time="2026-04-17T23:45:20.099537194Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252746" Apr 17 23:45:20.102722 containerd[1709]: time="2026-04-17T23:45:20.102654031Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:20.107363 containerd[1709]: time="2026-04-17T23:45:20.107289686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:20.108369 containerd[1709]: time="2026-04-17T23:45:20.108333499Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 2.395887586s" Apr 17 23:45:20.108613 containerd[1709]: time="2026-04-17T23:45:20.108496001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 17 23:45:20.109115 containerd[1709]: time="2026-04-17T23:45:20.109077907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 17 23:45:21.418343 containerd[1709]: time="2026-04-17T23:45:21.418285850Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:21.421045 containerd[1709]: time="2026-04-17T23:45:21.420976706Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810899" Apr 17 23:45:21.424129 containerd[1709]: time="2026-04-17T23:45:21.424072971Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:21.429099 containerd[1709]: time="2026-04-17T23:45:21.429048775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:21.430275 containerd[1709]: time="2026-04-17T23:45:21.430137598Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.32101959s" Apr 17 23:45:21.430275 containerd[1709]: time="2026-04-17T23:45:21.430176799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 17 23:45:21.430940 containerd[1709]: time="2026-04-17T23:45:21.430916615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 17 23:45:22.536472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211398342.mount: Deactivated successfully. 
Apr 17 23:45:22.908311 containerd[1709]: time="2026-04-17T23:45:22.908172808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:22.910548 containerd[1709]: time="2026-04-17T23:45:22.910482956Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972962" Apr 17 23:45:22.913771 containerd[1709]: time="2026-04-17T23:45:22.913686523Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:22.920564 containerd[1709]: time="2026-04-17T23:45:22.920503666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:22.921409 containerd[1709]: time="2026-04-17T23:45:22.921198081Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.490165264s" Apr 17 23:45:22.921409 containerd[1709]: time="2026-04-17T23:45:22.921240482Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 17 23:45:22.922155 containerd[1709]: time="2026-04-17T23:45:22.922114400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 17 23:45:23.635869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517463504.mount: Deactivated successfully. 
Apr 17 23:45:25.018397 containerd[1709]: time="2026-04-17T23:45:25.018337679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:25.020575 containerd[1709]: time="2026-04-17T23:45:25.020365222Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Apr 17 23:45:25.023818 containerd[1709]: time="2026-04-17T23:45:25.023751093Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:25.029026 containerd[1709]: time="2026-04-17T23:45:25.028969902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:25.030177 containerd[1709]: time="2026-04-17T23:45:25.030023124Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.107869523s" Apr 17 23:45:25.030177 containerd[1709]: time="2026-04-17T23:45:25.030066525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 17 23:45:25.030803 containerd[1709]: time="2026-04-17T23:45:25.030769040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 17 23:45:25.644289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490525257.mount: Deactivated successfully. 
Apr 17 23:45:25.663559 containerd[1709]: time="2026-04-17T23:45:25.663507328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:25.666392 containerd[1709]: time="2026-04-17T23:45:25.666322965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Apr 17 23:45:25.669262 containerd[1709]: time="2026-04-17T23:45:25.669201303Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:25.673060 containerd[1709]: time="2026-04-17T23:45:25.673007752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:25.673757 containerd[1709]: time="2026-04-17T23:45:25.673721462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 642.794818ms" Apr 17 23:45:25.673850 containerd[1709]: time="2026-04-17T23:45:25.673762862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 17 23:45:25.674653 containerd[1709]: time="2026-04-17T23:45:25.674449571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 17 23:45:26.307961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670348657.mount: Deactivated successfully. Apr 17 23:45:28.981694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Apr 17 23:45:28.987949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:29.092165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:29.101067 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:45:29.137219 kubelet[2616]: E0417 23:45:29.137108 2616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:45:29.139483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:45:29.139740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:45:39.231791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 17 23:45:39.236941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:42.995647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:43.000864 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:45:43.038400 kubelet[2634]: E0417 23:45:43.038321 2634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:45:43.040748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:45:43.040971 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 17 23:45:46.633669 containerd[1709]: time="2026-04-17T23:45:46.633603382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:46.635904 containerd[1709]: time="2026-04-17T23:45:46.635681709Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874825" Apr 17 23:45:46.639966 containerd[1709]: time="2026-04-17T23:45:46.639502860Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:46.644050 containerd[1709]: time="2026-04-17T23:45:46.644010919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:45:46.645134 containerd[1709]: time="2026-04-17T23:45:46.645094634Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 20.970612162s" Apr 17 23:45:46.645232 containerd[1709]: time="2026-04-17T23:45:46.645140534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 17 23:45:48.594418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:48.600032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:48.649041 systemd[1]: Reloading requested from client PID 2719 ('systemctl') (unit session-9.scope)... Apr 17 23:45:48.649241 systemd[1]: Reloading... 
Apr 17 23:45:48.796744 zram_generator::config[2762]: No configuration found. Apr 17 23:45:48.915912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:45:48.997817 systemd[1]: Reloading finished in 347 ms. Apr 17 23:45:49.053768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:49.057983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:49.061264 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:45:49.061506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:49.067001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:49.389126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:49.403101 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:45:49.438342 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:45:49.438342 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:45:49.438801 kubelet[2831]: I0417 23:45:49.438411 2831 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:45:49.854800 kubelet[2831]: I0417 23:45:49.854753 2831 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:45:49.854800 kubelet[2831]: I0417 23:45:49.854785 2831 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:45:49.854800 kubelet[2831]: I0417 23:45:49.854815 2831 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:45:49.855023 kubelet[2831]: I0417 23:45:49.854822 2831 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:45:49.855121 kubelet[2831]: I0417 23:45:49.855100 2831 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:45:50.253861 kubelet[2831]: E0417 23:45:50.253786 2831 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:45:50.254363 kubelet[2831]: I0417 23:45:50.254333 2831 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:45:50.263061 kubelet[2831]: E0417 23:45:50.263025 2831 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:45:50.263168 kubelet[2831]: I0417 23:45:50.263090 2831 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 17 23:45:50.266830 kubelet[2831]: I0417 23:45:50.266809 2831 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 17 23:45:50.268117 kubelet[2831]: I0417 23:45:50.268080 2831 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:45:50.268299 kubelet[2831]: I0417 23:45:50.268119 2831 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-7251cc3c8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
Apr 17 23:45:50.268499 kubelet[2831]: I0417 23:45:50.268300 2831 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:45:50.268499 kubelet[2831]: I0417 23:45:50.268314 2831 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 23:45:50.268499 kubelet[2831]: I0417 23:45:50.268437 2831 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:45:50.273671 kubelet[2831]: I0417 23:45:50.273652 2831 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:45:50.273844 kubelet[2831]: I0417 23:45:50.273826 2831 kubelet.go:475] "Attempting to sync node with API server" Apr 17 23:45:50.273902 kubelet[2831]: I0417 23:45:50.273846 2831 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:45:50.273902 kubelet[2831]: I0417 23:45:50.273872 2831 kubelet.go:387] "Adding apiserver pod source" Apr 17 23:45:50.273902 kubelet[2831]: I0417 23:45:50.273889 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:45:50.276225 kubelet[2831]: E0417 23:45:50.275997 2831 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:45:50.276507 kubelet[2831]: E0417 23:45:50.276442 2831 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-7251cc3c8a&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:45:50.277877 kubelet[2831]: I0417 23:45:50.276663 2831 kuberuntime_manager.go:291] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:45:50.277877 kubelet[2831]: I0417 23:45:50.277316 2831 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:45:50.277877 kubelet[2831]: I0417 23:45:50.277358 2831 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:45:50.277877 kubelet[2831]: W0417 23:45:50.277417 2831 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:45:50.280731 kubelet[2831]: I0417 23:45:50.280438 2831 server.go:1262] "Started kubelet" Apr 17 23:45:50.283297 kubelet[2831]: I0417 23:45:50.283045 2831 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:45:50.284793 kubelet[2831]: I0417 23:45:50.283781 2831 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:45:50.284793 kubelet[2831]: I0417 23:45:50.283849 2831 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:45:50.284793 kubelet[2831]: I0417 23:45:50.284174 2831 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:45:50.284945 kubelet[2831]: I0417 23:45:50.284911 2831 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:45:50.287340 kubelet[2831]: I0417 23:45:50.287313 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:45:50.290524 kubelet[2831]: E0417 23:45:50.288518 2831 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-7251cc3c8a.18a749a4d2c2bb65 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-7251cc3c8a,UID:ci-4081.3.6-n-7251cc3c8a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-7251cc3c8a,},FirstTimestamp:2026-04-17 23:45:50.280391525 +0000 UTC m=+0.873741960,LastTimestamp:2026-04-17 23:45:50.280391525 +0000 UTC m=+0.873741960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-7251cc3c8a,}" Apr 17 23:45:50.290524 kubelet[2831]: I0417 23:45:50.290119 2831 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:45:50.294326 kubelet[2831]: E0417 23:45:50.294296 2831 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" Apr 17 23:45:50.294403 kubelet[2831]: I0417 23:45:50.294339 2831 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:45:50.294581 kubelet[2831]: I0417 23:45:50.294532 2831 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:45:50.294649 kubelet[2831]: I0417 23:45:50.294593 2831 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:45:50.295015 kubelet[2831]: E0417 23:45:50.294988 2831 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:45:50.295301 kubelet[2831]: E0417 23:45:50.295266 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-7251cc3c8a?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Apr 17 23:45:50.296668 kubelet[2831]: I0417 23:45:50.296645 2831 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:45:50.296770 kubelet[2831]: I0417 23:45:50.296748 2831 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:45:50.298736 kubelet[2831]: I0417 23:45:50.298577 2831 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:45:50.307624 kubelet[2831]: E0417 23:45:50.307597 2831 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:45:50.333271 kubelet[2831]: I0417 23:45:50.333213 2831 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:45:50.334621 kubelet[2831]: I0417 23:45:50.334567 2831 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:45:50.334621 kubelet[2831]: I0417 23:45:50.334594 2831 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:45:50.334621 kubelet[2831]: I0417 23:45:50.334620 2831 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:45:50.334898 kubelet[2831]: E0417 23:45:50.334662 2831 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:45:50.336914 kubelet[2831]: E0417 23:45:50.336701 2831 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:45:50.351018 kubelet[2831]: I0417 23:45:50.350866 2831 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:45:50.351018 kubelet[2831]: I0417 23:45:50.351015 2831 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:45:50.351658 kubelet[2831]: I0417 23:45:50.351036 2831 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:45:50.357174 kubelet[2831]: I0417 23:45:50.357143 2831 policy_none.go:49] "None policy: Start" Apr 17 23:45:50.357174 kubelet[2831]: I0417 23:45:50.357171 2831 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:45:50.357305 kubelet[2831]: I0417 23:45:50.357185 2831 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:45:50.361259 kubelet[2831]: I0417 23:45:50.361234 2831 policy_none.go:47] "Start" Apr 17 23:45:50.365519 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:45:50.373823 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 17 23:45:50.377108 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:45:50.388769 kubelet[2831]: E0417 23:45:50.388434 2831 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:45:50.388863 kubelet[2831]: I0417 23:45:50.388818 2831 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:45:50.388863 kubelet[2831]: I0417 23:45:50.388833 2831 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:45:50.389201 kubelet[2831]: I0417 23:45:50.389176 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:45:50.390784 kubelet[2831]: E0417 23:45:50.390666 2831 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:45:50.390866 kubelet[2831]: E0417 23:45:50.390834 2831 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-7251cc3c8a\" not found" Apr 17 23:45:50.448114 systemd[1]: Created slice kubepods-burstable-pod5a0746854aba60f90273254e390daa47.slice - libcontainer container kubepods-burstable-pod5a0746854aba60f90273254e390daa47.slice. Apr 17 23:45:50.455623 kubelet[2831]: E0417 23:45:50.455532 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.460896 systemd[1]: Created slice kubepods-burstable-pod3766451a169af2aa8a197ec67b5616df.slice - libcontainer container kubepods-burstable-pod3766451a169af2aa8a197ec67b5616df.slice. 
Apr 17 23:45:50.463345 kubelet[2831]: E0417 23:45:50.463293 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.465634 systemd[1]: Created slice kubepods-burstable-podffc11f3676b0c5ba7a0257d7d0cedbd9.slice - libcontainer container kubepods-burstable-podffc11f3676b0c5ba7a0257d7d0cedbd9.slice. Apr 17 23:45:50.467463 kubelet[2831]: E0417 23:45:50.467435 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.491645 kubelet[2831]: I0417 23:45:50.491615 2831 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.492134 kubelet[2831]: E0417 23:45:50.492101 2831 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.495658 kubelet[2831]: E0417 23:45:50.495628 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-7251cc3c8a?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Apr 17 23:45:50.596239 kubelet[2831]: I0417 23:45:50.596090 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a0746854aba60f90273254e390daa47-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-7251cc3c8a\" (UID: \"5a0746854aba60f90273254e390daa47\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596239 kubelet[2831]: I0417 23:45:50.596137 2831 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3766451a169af2aa8a197ec67b5616df-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" (UID: \"3766451a169af2aa8a197ec67b5616df\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596239 kubelet[2831]: I0417 23:45:50.596167 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596239 kubelet[2831]: I0417 23:45:50.596194 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596239 kubelet[2831]: I0417 23:45:50.596241 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3766451a169af2aa8a197ec67b5616df-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" (UID: \"3766451a169af2aa8a197ec67b5616df\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596515 kubelet[2831]: I0417 23:45:50.596277 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3766451a169af2aa8a197ec67b5616df-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" (UID: 
\"3766451a169af2aa8a197ec67b5616df\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596515 kubelet[2831]: I0417 23:45:50.596300 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596515 kubelet[2831]: I0417 23:45:50.596317 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.596515 kubelet[2831]: I0417 23:45:50.596339 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.693988 kubelet[2831]: I0417 23:45:50.693951 2831 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.694401 kubelet[2831]: E0417 23:45:50.694369 2831 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:50.764226 containerd[1709]: time="2026-04-17T23:45:50.764179225Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-7251cc3c8a,Uid:5a0746854aba60f90273254e390daa47,Namespace:kube-system,Attempt:0,}" Apr 17 23:45:50.769330 containerd[1709]: time="2026-04-17T23:45:50.769277893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-7251cc3c8a,Uid:3766451a169af2aa8a197ec67b5616df,Namespace:kube-system,Attempt:0,}" Apr 17 23:45:50.774746 containerd[1709]: time="2026-04-17T23:45:50.774696864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-7251cc3c8a,Uid:ffc11f3676b0c5ba7a0257d7d0cedbd9,Namespace:kube-system,Attempt:0,}" Apr 17 23:45:50.897064 kubelet[2831]: E0417 23:45:50.896949 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-7251cc3c8a?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Apr 17 23:45:51.096104 kubelet[2831]: I0417 23:45:51.096047 2831 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:51.096439 kubelet[2831]: E0417 23:45:51.096406 2831 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:51.113151 kubelet[2831]: E0417 23:45:51.113113 2831 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-7251cc3c8a&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:45:51.426438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416260008.mount: Deactivated successfully. 
Apr 17 23:45:51.450580 containerd[1709]: time="2026-04-17T23:45:51.449919097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:51.452402 containerd[1709]: time="2026-04-17T23:45:51.452361029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:51.454554 containerd[1709]: time="2026-04-17T23:45:51.454509258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 17 23:45:51.456435 containerd[1709]: time="2026-04-17T23:45:51.456389683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:45:51.459398 containerd[1709]: time="2026-04-17T23:45:51.459365522Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:51.462145 containerd[1709]: time="2026-04-17T23:45:51.462106758Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:51.464382 containerd[1709]: time="2026-04-17T23:45:51.464129085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:45:51.467494 containerd[1709]: time="2026-04-17T23:45:51.467462629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:45:51.468329 
containerd[1709]: time="2026-04-17T23:45:51.468295140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 704.035514ms" Apr 17 23:45:51.470395 containerd[1709]: time="2026-04-17T23:45:51.470357267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.582902ms" Apr 17 23:45:51.470996 containerd[1709]: time="2026-04-17T23:45:51.470963175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 701.614481ms" Apr 17 23:45:51.673998 kubelet[2831]: E0417 23:45:51.673955 2831 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:45:51.697660 kubelet[2831]: E0417 23:45:51.697531 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-7251cc3c8a?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Apr 17 23:45:51.761980 kubelet[2831]: E0417 23:45:51.761928 2831 
reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:45:51.766915 kubelet[2831]: E0417 23:45:51.766878 2831 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:45:51.898782 kubelet[2831]: I0417 23:45:51.898751 2831 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:51.899153 kubelet[2831]: E0417 23:45:51.899113 2831 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:52.299429 kubelet[2831]: E0417 23:45:52.298502 2831 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:45:52.329908 containerd[1709]: time="2026-04-17T23:45:52.329781237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:52.330662 containerd[1709]: time="2026-04-17T23:45:52.330076441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:52.330662 containerd[1709]: time="2026-04-17T23:45:52.330159142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:52.330662 containerd[1709]: time="2026-04-17T23:45:52.330570647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:52.332613 containerd[1709]: time="2026-04-17T23:45:52.332172468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:52.332613 containerd[1709]: time="2026-04-17T23:45:52.332233869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:52.332613 containerd[1709]: time="2026-04-17T23:45:52.332275770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:52.332613 containerd[1709]: time="2026-04-17T23:45:52.332427772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:52.334666 containerd[1709]: time="2026-04-17T23:45:52.334256096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:45:52.334666 containerd[1709]: time="2026-04-17T23:45:52.334347497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:45:52.334666 containerd[1709]: time="2026-04-17T23:45:52.334369997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:52.334666 containerd[1709]: time="2026-04-17T23:45:52.334483899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:45:52.371883 systemd[1]: Started cri-containerd-013c71f71ddfc7acaf8c7a81bb239f00d04302e8a378140245d1724ad847280c.scope - libcontainer container 013c71f71ddfc7acaf8c7a81bb239f00d04302e8a378140245d1724ad847280c. Apr 17 23:45:52.374152 systemd[1]: Started cri-containerd-f1cc0ce26cceade5a83aaf0269af47b8a9a65891879ed7e1541515c1345135e5.scope - libcontainer container f1cc0ce26cceade5a83aaf0269af47b8a9a65891879ed7e1541515c1345135e5. Apr 17 23:45:52.384115 systemd[1]: Started cri-containerd-b034c93fb62f506f1aed8a9388d4749b4a37ab143fdeb4cf1767d2b2a867c40d.scope - libcontainer container b034c93fb62f506f1aed8a9388d4749b4a37ab143fdeb4cf1767d2b2a867c40d. Apr 17 23:45:52.448121 containerd[1709]: time="2026-04-17T23:45:52.448079502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-7251cc3c8a,Uid:5a0746854aba60f90273254e390daa47,Namespace:kube-system,Attempt:0,} returns sandbox id \"013c71f71ddfc7acaf8c7a81bb239f00d04302e8a378140245d1724ad847280c\"" Apr 17 23:45:52.479693 containerd[1709]: time="2026-04-17T23:45:52.479504017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-7251cc3c8a,Uid:3766451a169af2aa8a197ec67b5616df,Namespace:kube-system,Attempt:0,} returns sandbox id \"b034c93fb62f506f1aed8a9388d4749b4a37ab143fdeb4cf1767d2b2a867c40d\"" Apr 17 23:45:52.484104 containerd[1709]: time="2026-04-17T23:45:52.484071278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-7251cc3c8a,Uid:ffc11f3676b0c5ba7a0257d7d0cedbd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1cc0ce26cceade5a83aaf0269af47b8a9a65891879ed7e1541515c1345135e5\"" Apr 17 23:45:52.535281 containerd[1709]: 
time="2026-04-17T23:45:52.535238855Z" level=info msg="CreateContainer within sandbox \"013c71f71ddfc7acaf8c7a81bb239f00d04302e8a378140245d1724ad847280c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:45:52.539991 containerd[1709]: time="2026-04-17T23:45:52.539934817Z" level=info msg="CreateContainer within sandbox \"b034c93fb62f506f1aed8a9388d4749b4a37ab143fdeb4cf1767d2b2a867c40d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:45:52.545043 containerd[1709]: time="2026-04-17T23:45:52.545012684Z" level=info msg="CreateContainer within sandbox \"f1cc0ce26cceade5a83aaf0269af47b8a9a65891879ed7e1541515c1345135e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:45:52.570383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862328963.mount: Deactivated successfully. Apr 17 23:45:52.603558 containerd[1709]: time="2026-04-17T23:45:52.603508358Z" level=info msg="CreateContainer within sandbox \"b034c93fb62f506f1aed8a9388d4749b4a37ab143fdeb4cf1767d2b2a867c40d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60a4649738551dd44c116b471d81a1a7e6b6014d8ac97a3354ff9fee88717d62\"" Apr 17 23:45:52.607324 containerd[1709]: time="2026-04-17T23:45:52.606914803Z" level=info msg="StartContainer for \"60a4649738551dd44c116b471d81a1a7e6b6014d8ac97a3354ff9fee88717d62\"" Apr 17 23:45:52.611436 containerd[1709]: time="2026-04-17T23:45:52.611299061Z" level=info msg="CreateContainer within sandbox \"f1cc0ce26cceade5a83aaf0269af47b8a9a65891879ed7e1541515c1345135e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"00b62d88ff4c04d2d5033d75b3933728efd49882e7b524af222fbf53309a555c\"" Apr 17 23:45:52.612529 containerd[1709]: time="2026-04-17T23:45:52.612187473Z" level=info msg="StartContainer for \"00b62d88ff4c04d2d5033d75b3933728efd49882e7b524af222fbf53309a555c\"" Apr 17 23:45:52.614007 containerd[1709]: 
time="2026-04-17T23:45:52.613972996Z" level=info msg="CreateContainer within sandbox \"013c71f71ddfc7acaf8c7a81bb239f00d04302e8a378140245d1724ad847280c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb5adf659117fb8d5e47fc74db16395a13b3c9d6527716395d3f1e917f10251d\"" Apr 17 23:45:52.615762 containerd[1709]: time="2026-04-17T23:45:52.614571104Z" level=info msg="StartContainer for \"cb5adf659117fb8d5e47fc74db16395a13b3c9d6527716395d3f1e917f10251d\"" Apr 17 23:45:52.649922 systemd[1]: Started cri-containerd-60a4649738551dd44c116b471d81a1a7e6b6014d8ac97a3354ff9fee88717d62.scope - libcontainer container 60a4649738551dd44c116b471d81a1a7e6b6014d8ac97a3354ff9fee88717d62. Apr 17 23:45:52.666927 systemd[1]: Started cri-containerd-cb5adf659117fb8d5e47fc74db16395a13b3c9d6527716395d3f1e917f10251d.scope - libcontainer container cb5adf659117fb8d5e47fc74db16395a13b3c9d6527716395d3f1e917f10251d. Apr 17 23:45:52.673344 systemd[1]: Started cri-containerd-00b62d88ff4c04d2d5033d75b3933728efd49882e7b524af222fbf53309a555c.scope - libcontainer container 00b62d88ff4c04d2d5033d75b3933728efd49882e7b524af222fbf53309a555c. 
Apr 17 23:45:52.748214 containerd[1709]: time="2026-04-17T23:45:52.748115471Z" level=info msg="StartContainer for \"60a4649738551dd44c116b471d81a1a7e6b6014d8ac97a3354ff9fee88717d62\" returns successfully" Apr 17 23:45:52.759410 containerd[1709]: time="2026-04-17T23:45:52.759198317Z" level=info msg="StartContainer for \"00b62d88ff4c04d2d5033d75b3933728efd49882e7b524af222fbf53309a555c\" returns successfully" Apr 17 23:45:52.780902 containerd[1709]: time="2026-04-17T23:45:52.780869605Z" level=info msg="StartContainer for \"cb5adf659117fb8d5e47fc74db16395a13b3c9d6527716395d3f1e917f10251d\" returns successfully" Apr 17 23:45:53.349692 kubelet[2831]: E0417 23:45:53.348979 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:53.354379 kubelet[2831]: E0417 23:45:53.354280 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:53.355564 kubelet[2831]: E0417 23:45:53.355531 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:53.422318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946376400.mount: Deactivated successfully. 
Apr 17 23:45:53.501385 kubelet[2831]: I0417 23:45:53.501351 2831 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.276801 kubelet[2831]: I0417 23:45:54.276748 2831 apiserver.go:52] "Watching apiserver" Apr 17 23:45:54.284835 kubelet[2831]: E0417 23:45:54.284796 2831 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.295012 kubelet[2831]: I0417 23:45:54.294973 2831 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:45:54.358841 kubelet[2831]: E0417 23:45:54.358114 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.358841 kubelet[2831]: E0417 23:45:54.358592 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.358841 kubelet[2831]: E0417 23:45:54.358625 2831 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.459131 kubelet[2831]: I0417 23:45:54.459090 2831 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.495286 kubelet[2831]: I0417 23:45:54.495213 2831 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.554017 kubelet[2831]: E0417 23:45:54.553794 2831 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-7251cc3c8a\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.554017 kubelet[2831]: I0417 23:45:54.553820 2831 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.555902 kubelet[2831]: E0417 23:45:54.555870 2831 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.555902 kubelet[2831]: I0417 23:45:54.555903 2831 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:54.557424 kubelet[2831]: E0417 23:45:54.557388 2831 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:55.717453 kubelet[2831]: I0417 23:45:55.717350 2831 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:55.725673 kubelet[2831]: I0417 23:45:55.725601 2831 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:45:56.082541 kubelet[2831]: I0417 23:45:56.082498 2831 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:56.095656 kubelet[2831]: I0417 23:45:56.095371 2831 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:45:56.634788 systemd[1]: Reloading requested from client PID 3118 ('systemctl') (unit session-9.scope)... 
Apr 17 23:45:56.634808 systemd[1]: Reloading... Apr 17 23:45:56.725864 zram_generator::config[3154]: No configuration found. Apr 17 23:45:56.863429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:45:56.957126 systemd[1]: Reloading finished in 321 ms. Apr 17 23:45:57.000517 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:57.014035 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:45:57.014305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:57.020960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:45:57.135160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:45:57.149112 (kubelet)[3225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:45:57.205920 kubelet[3225]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:45:57.205920 kubelet[3225]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:45:57.205920 kubelet[3225]: I0417 23:45:57.205190 3225 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:45:57.220811 kubelet[3225]: I0417 23:45:57.217862 3225 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 17 23:45:57.220811 kubelet[3225]: I0417 23:45:57.217890 3225 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:45:57.220811 kubelet[3225]: I0417 23:45:57.217919 3225 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:45:57.220811 kubelet[3225]: I0417 23:45:57.217932 3225 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:45:57.220811 kubelet[3225]: I0417 23:45:57.218230 3225 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:45:57.220811 kubelet[3225]: I0417 23:45:57.219852 3225 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:45:57.224831 kubelet[3225]: I0417 23:45:57.223657 3225 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:45:57.229742 kubelet[3225]: E0417 23:45:57.228120 3225 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:45:57.229742 kubelet[3225]: I0417 23:45:57.228164 3225 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:45:57.234587 kubelet[3225]: I0417 23:45:57.234563 3225 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:45:57.234997 kubelet[3225]: I0417 23:45:57.234971 3225 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:45:57.235456 kubelet[3225]: I0417 23:45:57.235086 3225 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-7251cc3c8a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:45:57.235607 kubelet[3225]: I0417 23:45:57.235597 3225 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:45:57.235676 kubelet[3225]: I0417 23:45:57.235669 3225 container_manager_linux.go:306] "Creating device plugin manager" Apr 17 23:45:57.235791 kubelet[3225]: I0417 23:45:57.235779 3225 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:45:57.236860 kubelet[3225]: I0417 23:45:57.236839 3225 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:45:57.238989 kubelet[3225]: I0417 23:45:57.238971 3225 kubelet.go:475] "Attempting to sync node with API server" Apr 17 23:45:57.239109 kubelet[3225]: I0417 23:45:57.239096 3225 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:45:57.239205 kubelet[3225]: I0417 23:45:57.239196 3225 kubelet.go:387] "Adding apiserver pod source" Apr 17 23:45:57.239280 kubelet[3225]: I0417 23:45:57.239271 3225 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:45:57.253147 kubelet[3225]: I0417 23:45:57.253121 3225 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:45:57.257949 kubelet[3225]: I0417 23:45:57.257924 3225 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:45:57.258078 kubelet[3225]: I0417 23:45:57.258068 3225 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:45:57.262937 kubelet[3225]: I0417 23:45:57.262920 3225 server.go:1262] "Started kubelet" Apr 17 23:45:57.275011 kubelet[3225]: I0417 23:45:57.263145 3225 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:45:57.275179 kubelet[3225]: I0417 23:45:57.275156 3225 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:45:57.279252 kubelet[3225]: I0417 23:45:57.279222 3225 
server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:45:57.279424 kubelet[3225]: I0417 23:45:57.269165 3225 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:45:57.281269 kubelet[3225]: I0417 23:45:57.268689 3225 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:45:57.281547 kubelet[3225]: I0417 23:45:57.268758 3225 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:45:57.284499 kubelet[3225]: I0417 23:45:57.284474 3225 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 17 23:45:57.285723 kubelet[3225]: E0417 23:45:57.284859 3225 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-7251cc3c8a\" not found" Apr 17 23:45:57.286424 kubelet[3225]: I0417 23:45:57.286403 3225 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:45:57.287730 kubelet[3225]: I0417 23:45:57.286531 3225 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:45:57.291652 kubelet[3225]: I0417 23:45:57.291628 3225 server.go:310] "Adding debug handlers to kubelet server" Apr 17 23:45:57.302308 kubelet[3225]: I0417 23:45:57.302280 3225 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:45:57.302857 kubelet[3225]: I0417 23:45:57.302550 3225 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:45:57.307082 kubelet[3225]: I0417 23:45:57.306625 3225 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:45:57.325914 kubelet[3225]: I0417 23:45:57.325880 3225 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 17 23:45:57.327462 kubelet[3225]: I0417 23:45:57.327440 3225 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:45:57.327911 kubelet[3225]: I0417 23:45:57.327557 3225 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 17 23:45:57.327911 kubelet[3225]: I0417 23:45:57.327587 3225 kubelet.go:2428] "Starting kubelet main sync loop" Apr 17 23:45:57.327911 kubelet[3225]: E0417 23:45:57.327637 3225 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:45:57.392926 kubelet[3225]: I0417 23:45:57.392883 3225 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:45:57.392926 kubelet[3225]: I0417 23:45:57.392904 3225 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:45:57.392926 kubelet[3225]: I0417 23:45:57.392925 3225 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:45:57.393153 kubelet[3225]: I0417 23:45:57.393077 3225 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:45:57.393153 kubelet[3225]: I0417 23:45:57.393091 3225 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:45:57.393153 kubelet[3225]: I0417 23:45:57.393114 3225 policy_none.go:49] "None policy: Start" Apr 17 23:45:57.393153 kubelet[3225]: I0417 23:45:57.393128 3225 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:45:57.393153 kubelet[3225]: I0417 23:45:57.393141 3225 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:45:57.393351 kubelet[3225]: I0417 23:45:57.393252 3225 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:45:57.393351 kubelet[3225]: I0417 23:45:57.393263 3225 policy_none.go:47] "Start" Apr 17 23:45:57.401009 kubelet[3225]: E0417 23:45:57.400614 3225 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:45:57.402389 kubelet[3225]: I0417 23:45:57.401785 3225 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:45:57.402389 kubelet[3225]: I0417 23:45:57.401803 3225 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:45:57.402389 kubelet[3225]: I0417 23:45:57.402196 3225 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:45:57.407087 kubelet[3225]: E0417 23:45:57.407063 3225 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:45:57.429660 kubelet[3225]: I0417 23:45:57.429284 3225 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.432653 kubelet[3225]: I0417 23:45:57.431981 3225 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.432653 kubelet[3225]: I0417 23:45:57.432269 3225 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.448175 kubelet[3225]: I0417 23:45:57.448153 3225 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:45:57.448358 kubelet[3225]: E0417 23:45:57.448345 3225 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-7251cc3c8a\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.452278 kubelet[3225]: I0417 23:45:57.452131 3225 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 
23:45:57.453089 kubelet[3225]: E0417 23:45:57.452493 3225 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.453089 kubelet[3225]: I0417 23:45:57.452751 3225 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:45:57.487475 kubelet[3225]: I0417 23:45:57.487434 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.487644 kubelet[3225]: I0417 23:45:57.487527 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.487644 kubelet[3225]: I0417 23:45:57.487562 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.487644 kubelet[3225]: I0417 23:45:57.487632 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3766451a169af2aa8a197ec67b5616df-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" (UID: \"3766451a169af2aa8a197ec67b5616df\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.487805 kubelet[3225]: I0417 23:45:57.487745 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.487858 kubelet[3225]: I0417 23:45:57.487779 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a0746854aba60f90273254e390daa47-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-7251cc3c8a\" (UID: \"5a0746854aba60f90273254e390daa47\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.487858 kubelet[3225]: I0417 23:45:57.487847 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3766451a169af2aa8a197ec67b5616df-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" (UID: \"3766451a169af2aa8a197ec67b5616df\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.487939 kubelet[3225]: I0417 23:45:57.487877 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3766451a169af2aa8a197ec67b5616df-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" (UID: \"3766451a169af2aa8a197ec67b5616df\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.488072 kubelet[3225]: I0417 23:45:57.488008 3225 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffc11f3676b0c5ba7a0257d7d0cedbd9-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-7251cc3c8a\" (UID: \"ffc11f3676b0c5ba7a0257d7d0cedbd9\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.516910 kubelet[3225]: I0417 23:45:57.515776 3225 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.529840 kubelet[3225]: I0417 23:45:57.529815 3225 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:45:57.530138 kubelet[3225]: I0417 23:45:57.530070 3225 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:46:00.986917 kubelet[3225]: I0417 23:45:58.244345 3225 apiserver.go:52] "Watching apiserver" Apr 17 23:46:00.986917 kubelet[3225]: I0417 23:45:58.286553 3225 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:46:00.986917 kubelet[3225]: I0417 23:45:58.369100 3225 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:46:00.986917 kubelet[3225]: I0417 23:45:58.378491 3225 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 17 23:46:00.986917 kubelet[3225]: E0417 23:45:58.378623 3225 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-7251cc3c8a\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" Apr 17 23:46:00.986917 kubelet[3225]: I0417 23:45:58.390427 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-7251cc3c8a" podStartSLOduration=3.390409215 
podStartE2EDuration="3.390409215s" podCreationTimestamp="2026-04-17 23:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:58.390109411 +0000 UTC m=+1.236822618" watchObservedRunningTime="2026-04-17 23:45:58.390409215 +0000 UTC m=+1.237122422" Apr 17 23:46:00.986917 kubelet[3225]: I0417 23:45:58.402492 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-7251cc3c8a" podStartSLOduration=1.402472576 podStartE2EDuration="1.402472576s" podCreationTimestamp="2026-04-17 23:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:58.402198972 +0000 UTC m=+1.248912179" watchObservedRunningTime="2026-04-17 23:45:58.402472576 +0000 UTC m=+1.249185783" Apr 17 23:46:00.987774 kubelet[3225]: I0417 23:45:58.434879 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-7251cc3c8a" podStartSLOduration=2.434862609 podStartE2EDuration="2.434862609s" podCreationTimestamp="2026-04-17 23:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:58.412841315 +0000 UTC m=+1.259554622" watchObservedRunningTime="2026-04-17 23:45:58.434862609 +0000 UTC m=+1.281575816" Apr 17 23:46:02.914361 kubelet[3225]: I0417 23:46:02.914315 3225 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:46:02.915037 kubelet[3225]: I0417 23:46:02.915006 3225 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:46:02.915128 containerd[1709]: time="2026-04-17T23:46:02.914801971Z" level=info msg="No cni config template is specified, wait for other system components to 
drop the config." Apr 17 23:46:03.906165 systemd[1]: Created slice kubepods-besteffort-podc8c00659_c8fd_4d51_91dd_4ff30ca5e680.slice - libcontainer container kubepods-besteffort-podc8c00659_c8fd_4d51_91dd_4ff30ca5e680.slice. Apr 17 23:46:03.929372 kubelet[3225]: I0417 23:46:03.929231 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8c00659-c8fd-4d51-91dd-4ff30ca5e680-kube-proxy\") pod \"kube-proxy-vb6qv\" (UID: \"c8c00659-c8fd-4d51-91dd-4ff30ca5e680\") " pod="kube-system/kube-proxy-vb6qv" Apr 17 23:46:03.929372 kubelet[3225]: I0417 23:46:03.929282 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8c00659-c8fd-4d51-91dd-4ff30ca5e680-xtables-lock\") pod \"kube-proxy-vb6qv\" (UID: \"c8c00659-c8fd-4d51-91dd-4ff30ca5e680\") " pod="kube-system/kube-proxy-vb6qv" Apr 17 23:46:03.929372 kubelet[3225]: I0417 23:46:03.929305 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8c00659-c8fd-4d51-91dd-4ff30ca5e680-lib-modules\") pod \"kube-proxy-vb6qv\" (UID: \"c8c00659-c8fd-4d51-91dd-4ff30ca5e680\") " pod="kube-system/kube-proxy-vb6qv" Apr 17 23:46:03.929372 kubelet[3225]: I0417 23:46:03.929334 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmgn\" (UniqueName: \"kubernetes.io/projected/c8c00659-c8fd-4d51-91dd-4ff30ca5e680-kube-api-access-btmgn\") pod \"kube-proxy-vb6qv\" (UID: \"c8c00659-c8fd-4d51-91dd-4ff30ca5e680\") " pod="kube-system/kube-proxy-vb6qv" Apr 17 23:46:04.120092 systemd[1]: Created slice kubepods-besteffort-pod362691de_1509_4017_955a_2f7a4b403969.slice - libcontainer container kubepods-besteffort-pod362691de_1509_4017_955a_2f7a4b403969.slice. 
Apr 17 23:46:04.130312 kubelet[3225]: I0417 23:46:04.130277 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/362691de-1509-4017-955a-2f7a4b403969-var-lib-calico\") pod \"tigera-operator-5588576f44-86qxf\" (UID: \"362691de-1509-4017-955a-2f7a4b403969\") " pod="tigera-operator/tigera-operator-5588576f44-86qxf" Apr 17 23:46:04.130462 kubelet[3225]: I0417 23:46:04.130321 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td2j4\" (UniqueName: \"kubernetes.io/projected/362691de-1509-4017-955a-2f7a4b403969-kube-api-access-td2j4\") pod \"tigera-operator-5588576f44-86qxf\" (UID: \"362691de-1509-4017-955a-2f7a4b403969\") " pod="tigera-operator/tigera-operator-5588576f44-86qxf" Apr 17 23:46:04.225131 containerd[1709]: time="2026-04-17T23:46:04.224995636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vb6qv,Uid:c8c00659-c8fd-4d51-91dd-4ff30ca5e680,Namespace:kube-system,Attempt:0,}" Apr 17 23:46:04.277257 containerd[1709]: time="2026-04-17T23:46:04.276850239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:04.277257 containerd[1709]: time="2026-04-17T23:46:04.276947541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:04.277257 containerd[1709]: time="2026-04-17T23:46:04.276992141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:04.277257 containerd[1709]: time="2026-04-17T23:46:04.277081142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:04.303855 systemd[1]: Started cri-containerd-8e976b715d411f8c361d4fc78a160b562487c9540fd6e1f31bbfbf9b5495a985.scope - libcontainer container 8e976b715d411f8c361d4fc78a160b562487c9540fd6e1f31bbfbf9b5495a985. Apr 17 23:46:04.327100 containerd[1709]: time="2026-04-17T23:46:04.327026920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vb6qv,Uid:c8c00659-c8fd-4d51-91dd-4ff30ca5e680,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e976b715d411f8c361d4fc78a160b562487c9540fd6e1f31bbfbf9b5495a985\"" Apr 17 23:46:04.345583 containerd[1709]: time="2026-04-17T23:46:04.345538271Z" level=info msg="CreateContainer within sandbox \"8e976b715d411f8c361d4fc78a160b562487c9540fd6e1f31bbfbf9b5495a985\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:46:04.380803 containerd[1709]: time="2026-04-17T23:46:04.380675447Z" level=info msg="CreateContainer within sandbox \"8e976b715d411f8c361d4fc78a160b562487c9540fd6e1f31bbfbf9b5495a985\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4ba0ae3ddd11e0a4fb8ce6adfbff14b21b979d3ba99f52595bd057d72720f2a\"" Apr 17 23:46:04.382749 containerd[1709]: time="2026-04-17T23:46:04.381658660Z" level=info msg="StartContainer for \"c4ba0ae3ddd11e0a4fb8ce6adfbff14b21b979d3ba99f52595bd057d72720f2a\"" Apr 17 23:46:04.411169 systemd[1]: Started cri-containerd-c4ba0ae3ddd11e0a4fb8ce6adfbff14b21b979d3ba99f52595bd057d72720f2a.scope - libcontainer container c4ba0ae3ddd11e0a4fb8ce6adfbff14b21b979d3ba99f52595bd057d72720f2a. 
Apr 17 23:46:04.429550 containerd[1709]: time="2026-04-17T23:46:04.429504109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-86qxf,Uid:362691de-1509-4017-955a-2f7a4b403969,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:46:04.444331 containerd[1709]: time="2026-04-17T23:46:04.444264909Z" level=info msg="StartContainer for \"c4ba0ae3ddd11e0a4fb8ce6adfbff14b21b979d3ba99f52595bd057d72720f2a\" returns successfully" Apr 17 23:46:04.486728 containerd[1709]: time="2026-04-17T23:46:04.486627384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:04.486962 containerd[1709]: time="2026-04-17T23:46:04.486925088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:04.487111 containerd[1709]: time="2026-04-17T23:46:04.487086590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:04.487453 containerd[1709]: time="2026-04-17T23:46:04.487397594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:04.509908 systemd[1]: Started cri-containerd-32338f1bb5d7a50c14a8abe6e31abcc4983d21efe5795b9ed0394a57c2590e09.scope - libcontainer container 32338f1bb5d7a50c14a8abe6e31abcc4983d21efe5795b9ed0394a57c2590e09. 
Apr 17 23:46:04.558960 containerd[1709]: time="2026-04-17T23:46:04.558851263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-86qxf,Uid:362691de-1509-4017-955a-2f7a4b403969,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"32338f1bb5d7a50c14a8abe6e31abcc4983d21efe5795b9ed0394a57c2590e09\"" Apr 17 23:46:04.561242 containerd[1709]: time="2026-04-17T23:46:04.561196895Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:46:05.043632 systemd[1]: run-containerd-runc-k8s.io-8e976b715d411f8c361d4fc78a160b562487c9540fd6e1f31bbfbf9b5495a985-runc.RE4s1S.mount: Deactivated successfully. Apr 17 23:46:05.400750 kubelet[3225]: I0417 23:46:05.400561 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vb6qv" podStartSLOduration=2.400542076 podStartE2EDuration="2.400542076s" podCreationTimestamp="2026-04-17 23:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:05.400423074 +0000 UTC m=+8.247136281" watchObservedRunningTime="2026-04-17 23:46:05.400542076 +0000 UTC m=+8.247255283" Apr 17 23:46:07.889384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529891877.mount: Deactivated successfully. 
Apr 17 23:46:10.047372 containerd[1709]: time="2026-04-17T23:46:10.047305509Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:10.050403 containerd[1709]: time="2026-04-17T23:46:10.050246460Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:46:10.053661 containerd[1709]: time="2026-04-17T23:46:10.053618319Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:10.061418 containerd[1709]: time="2026-04-17T23:46:10.061252652Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:10.062654 containerd[1709]: time="2026-04-17T23:46:10.062606976Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.50136148s" Apr 17 23:46:10.062654 containerd[1709]: time="2026-04-17T23:46:10.062644876Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:46:10.070039 containerd[1709]: time="2026-04-17T23:46:10.069995504Z" level=info msg="CreateContainer within sandbox \"32338f1bb5d7a50c14a8abe6e31abcc4983d21efe5795b9ed0394a57c2590e09\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:46:10.090335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100689419.mount: Deactivated successfully. 
Apr 17 23:46:10.097253 containerd[1709]: time="2026-04-17T23:46:10.097205578Z" level=info msg="CreateContainer within sandbox \"32338f1bb5d7a50c14a8abe6e31abcc4983d21efe5795b9ed0394a57c2590e09\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e5e5e39873412de487af9c2e594d16c820f0c901af2e8d156ac9d36c6c01fc8d\"" Apr 17 23:46:10.097951 containerd[1709]: time="2026-04-17T23:46:10.097862489Z" level=info msg="StartContainer for \"e5e5e39873412de487af9c2e594d16c820f0c901af2e8d156ac9d36c6c01fc8d\"" Apr 17 23:46:10.132908 systemd[1]: Started cri-containerd-e5e5e39873412de487af9c2e594d16c820f0c901af2e8d156ac9d36c6c01fc8d.scope - libcontainer container e5e5e39873412de487af9c2e594d16c820f0c901af2e8d156ac9d36c6c01fc8d. Apr 17 23:46:10.162524 containerd[1709]: time="2026-04-17T23:46:10.161916705Z" level=info msg="StartContainer for \"e5e5e39873412de487af9c2e594d16c820f0c901af2e8d156ac9d36c6c01fc8d\" returns successfully" Apr 17 23:46:10.624702 kubelet[3225]: I0417 23:46:10.624221 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-86qxf" podStartSLOduration=1.121274848 podStartE2EDuration="6.624199053s" podCreationTimestamp="2026-04-17 23:46:04 +0000 UTC" firstStartedPulling="2026-04-17 23:46:04.560482185 +0000 UTC m=+7.407195492" lastFinishedPulling="2026-04-17 23:46:10.06340639 +0000 UTC m=+12.910119697" observedRunningTime="2026-04-17 23:46:10.410556034 +0000 UTC m=+13.257269341" watchObservedRunningTime="2026-04-17 23:46:10.624199053 +0000 UTC m=+13.470912360" Apr 17 23:46:16.355608 sudo[2222]: pam_unix(sudo:session): session closed for user root Apr 17 23:46:16.374959 sshd[2219]: pam_unix(sshd:session): session closed for user core Apr 17 23:46:16.378202 systemd-logind[1683]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:46:16.381190 systemd[1]: sshd@6-10.0.0.19:22-20.229.252.112:48884.service: Deactivated successfully. 
Apr 17 23:46:16.386858 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:46:16.387312 systemd[1]: session-9.scope: Consumed 4.715s CPU time, 159.1M memory peak, 0B memory swap peak. Apr 17 23:46:16.389019 systemd-logind[1683]: Removed session 9. Apr 17 23:46:19.567679 systemd[1]: Created slice kubepods-besteffort-pod849da82c_45b2_44b3_8c1d_6d59ba0348e4.slice - libcontainer container kubepods-besteffort-pod849da82c_45b2_44b3_8c1d_6d59ba0348e4.slice. Apr 17 23:46:19.634157 kubelet[3225]: I0417 23:46:19.633965 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/849da82c-45b2-44b3-8c1d-6d59ba0348e4-typha-certs\") pod \"calico-typha-5bb97989d5-5l89g\" (UID: \"849da82c-45b2-44b3-8c1d-6d59ba0348e4\") " pod="calico-system/calico-typha-5bb97989d5-5l89g" Apr 17 23:46:19.634157 kubelet[3225]: I0417 23:46:19.634073 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd2gw\" (UniqueName: \"kubernetes.io/projected/849da82c-45b2-44b3-8c1d-6d59ba0348e4-kube-api-access-cd2gw\") pod \"calico-typha-5bb97989d5-5l89g\" (UID: \"849da82c-45b2-44b3-8c1d-6d59ba0348e4\") " pod="calico-system/calico-typha-5bb97989d5-5l89g" Apr 17 23:46:19.634157 kubelet[3225]: I0417 23:46:19.634104 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/849da82c-45b2-44b3-8c1d-6d59ba0348e4-tigera-ca-bundle\") pod \"calico-typha-5bb97989d5-5l89g\" (UID: \"849da82c-45b2-44b3-8c1d-6d59ba0348e4\") " pod="calico-system/calico-typha-5bb97989d5-5l89g" Apr 17 23:46:19.765649 systemd[1]: Created slice kubepods-besteffort-pod49bc7fff_d2d1_4e82_ba3c_550803a9e8f2.slice - libcontainer container kubepods-besteffort-pod49bc7fff_d2d1_4e82_ba3c_550803a9e8f2.slice. 
Apr 17 23:46:19.835254 kubelet[3225]: I0417 23:46:19.834553 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-flexvol-driver-host\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835254 kubelet[3225]: I0417 23:46:19.834605 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-bpffs\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835254 kubelet[3225]: I0417 23:46:19.834626 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-cni-log-dir\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835254 kubelet[3225]: I0417 23:46:19.834643 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-cni-net-dir\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835254 kubelet[3225]: I0417 23:46:19.834700 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-policysync\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835565 kubelet[3225]: I0417 23:46:19.834765 3225 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-lib-modules\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835565 kubelet[3225]: I0417 23:46:19.834945 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-tigera-ca-bundle\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835565 kubelet[3225]: I0417 23:46:19.834972 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-var-run-calico\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835565 kubelet[3225]: I0417 23:46:19.834994 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hj55\" (UniqueName: \"kubernetes.io/projected/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-kube-api-access-7hj55\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.835565 kubelet[3225]: I0417 23:46:19.835018 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-sys-fs\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.837859 kubelet[3225]: I0417 23:46:19.835046 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-var-lib-calico\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.837859 kubelet[3225]: I0417 23:46:19.835064 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-node-certs\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.837859 kubelet[3225]: I0417 23:46:19.835086 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-xtables-lock\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.837859 kubelet[3225]: I0417 23:46:19.835108 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-cni-bin-dir\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.837859 kubelet[3225]: I0417 23:46:19.835136 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/49bc7fff-d2d1-4e82-ba3c-550803a9e8f2-nodeproc\") pod \"calico-node-989d2\" (UID: \"49bc7fff-d2d1-4e82-ba3c-550803a9e8f2\") " pod="calico-system/calico-node-989d2" Apr 17 23:46:19.861029 kubelet[3225]: E0417 23:46:19.860969 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662" Apr 17 23:46:19.878050 containerd[1709]: time="2026-04-17T23:46:19.877998502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb97989d5-5l89g,Uid:849da82c-45b2-44b3-8c1d-6d59ba0348e4,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:19.921033 containerd[1709]: time="2026-04-17T23:46:19.920753296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:19.921033 containerd[1709]: time="2026-04-17T23:46:19.920841097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:19.921033 containerd[1709]: time="2026-04-17T23:46:19.920871097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:19.921416 containerd[1709]: time="2026-04-17T23:46:19.921135700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:19.936171 kubelet[3225]: I0417 23:46:19.935841 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d55082e2-e0fa-4118-b796-695fc5437662-kubelet-dir\") pod \"csi-node-driver-vtqds\" (UID: \"d55082e2-e0fa-4118-b796-695fc5437662\") " pod="calico-system/csi-node-driver-vtqds" Apr 17 23:46:19.936171 kubelet[3225]: I0417 23:46:19.935900 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d55082e2-e0fa-4118-b796-695fc5437662-registration-dir\") pod \"csi-node-driver-vtqds\" (UID: \"d55082e2-e0fa-4118-b796-695fc5437662\") " pod="calico-system/csi-node-driver-vtqds" Apr 17 23:46:19.936171 kubelet[3225]: I0417 23:46:19.935937 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cq27\" (UniqueName: \"kubernetes.io/projected/d55082e2-e0fa-4118-b796-695fc5437662-kube-api-access-2cq27\") pod \"csi-node-driver-vtqds\" (UID: \"d55082e2-e0fa-4118-b796-695fc5437662\") " pod="calico-system/csi-node-driver-vtqds" Apr 17 23:46:19.936171 kubelet[3225]: I0417 23:46:19.935958 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d55082e2-e0fa-4118-b796-695fc5437662-socket-dir\") pod \"csi-node-driver-vtqds\" (UID: \"d55082e2-e0fa-4118-b796-695fc5437662\") " pod="calico-system/csi-node-driver-vtqds" Apr 17 23:46:19.936171 kubelet[3225]: I0417 23:46:19.935995 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d55082e2-e0fa-4118-b796-695fc5437662-varrun\") pod \"csi-node-driver-vtqds\" (UID: \"d55082e2-e0fa-4118-b796-695fc5437662\") " 
pod="calico-system/csi-node-driver-vtqds" Apr 17 23:46:19.940570 kubelet[3225]: E0417 23:46:19.939407 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.940570 kubelet[3225]: W0417 23:46:19.939429 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.940570 kubelet[3225]: E0417 23:46:19.939555 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.940570 kubelet[3225]: E0417 23:46:19.940391 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.940570 kubelet[3225]: W0417 23:46:19.940409 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.940570 kubelet[3225]: E0417 23:46:19.940531 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.942728 kubelet[3225]: E0417 23:46:19.941246 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.942728 kubelet[3225]: W0417 23:46:19.941359 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.942728 kubelet[3225]: E0417 23:46:19.941378 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.942728 kubelet[3225]: E0417 23:46:19.941942 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.942728 kubelet[3225]: W0417 23:46:19.941956 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.942728 kubelet[3225]: E0417 23:46:19.941970 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:19.942728 kubelet[3225]: E0417 23:46:19.942289 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:19.942728 kubelet[3225]: W0417 23:46:19.942300 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:19.942728 kubelet[3225]: E0417 23:46:19.942313 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:19.976867 systemd[1]: Started cri-containerd-dd378cabd43c0657bd0c6b9d10414598c9f2ebb63b20fbe0e1f93923a1f6400e.scope - libcontainer container dd378cabd43c0657bd0c6b9d10414598c9f2ebb63b20fbe0e1f93923a1f6400e.
Apr 17 23:46:20.018621 containerd[1709]: time="2026-04-17T23:46:20.018575225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb97989d5-5l89g,Uid:849da82c-45b2-44b3-8c1d-6d59ba0348e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd378cabd43c0657bd0c6b9d10414598c9f2ebb63b20fbe0e1f93923a1f6400e\"" Apr 17 23:46:20.020505 containerd[1709]: time="2026-04-17T23:46:20.020465347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:46:20.075827 containerd[1709]: time="2026-04-17T23:46:20.075775385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-989d2,Uid:49bc7fff-d2d1-4e82-ba3c-550803a9e8f2,Namespace:calico-system,Attempt:0,}" Apr 17 23:46:20.130903 containerd[1709]: time="2026-04-17T23:46:20.130530217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:46:20.130903 containerd[1709]: time="2026-04-17T23:46:20.130664319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:46:20.130903 containerd[1709]: time="2026-04-17T23:46:20.130681819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:20.133679 containerd[1709]: time="2026-04-17T23:46:20.131390627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:46:20.157898 systemd[1]: Started cri-containerd-4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386.scope - libcontainer container 4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386. Apr 17 23:46:20.181456 containerd[1709]: time="2026-04-17T23:46:20.181399804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-989d2,Uid:49bc7fff-d2d1-4e82-ba3c-550803a9e8f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\"" Apr 17 23:46:21.330187 kubelet[3225]: E0417 23:46:21.328840 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662" Apr 17 23:46:21.601119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366228261.mount: Deactivated successfully. 
Apr 17 23:46:22.559213 containerd[1709]: time="2026-04-17T23:46:22.559157149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:22.561746 containerd[1709]: time="2026-04-17T23:46:22.561666278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:46:22.564394 containerd[1709]: time="2026-04-17T23:46:22.564341309Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:22.568186 containerd[1709]: time="2026-04-17T23:46:22.568137653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:46:22.569423 containerd[1709]: time="2026-04-17T23:46:22.568823461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.548315214s" Apr 17 23:46:22.569423 containerd[1709]: time="2026-04-17T23:46:22.568861261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:46:22.570019 containerd[1709]: time="2026-04-17T23:46:22.569991774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:46:22.598670 containerd[1709]: time="2026-04-17T23:46:22.598620405Z" level=info msg="CreateContainer within sandbox \"dd378cabd43c0657bd0c6b9d10414598c9f2ebb63b20fbe0e1f93923a1f6400e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:46:22.632592 containerd[1709]: time="2026-04-17T23:46:22.632539596Z" level=info msg="CreateContainer within sandbox \"dd378cabd43c0657bd0c6b9d10414598c9f2ebb63b20fbe0e1f93923a1f6400e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5c0d3df86dcfc34e2f6e60dfa0ccd929fa423a2252d57b6a79b3911e482d11b5\"" Apr 17 23:46:22.633300 containerd[1709]: time="2026-04-17T23:46:22.633224504Z" level=info msg="StartContainer for \"5c0d3df86dcfc34e2f6e60dfa0ccd929fa423a2252d57b6a79b3911e482d11b5\"" Apr 17 23:46:22.671926 systemd[1]: Started cri-containerd-5c0d3df86dcfc34e2f6e60dfa0ccd929fa423a2252d57b6a79b3911e482d11b5.scope - libcontainer container 5c0d3df86dcfc34e2f6e60dfa0ccd929fa423a2252d57b6a79b3911e482d11b5. Apr 17 23:46:22.718133 containerd[1709]: time="2026-04-17T23:46:22.717321375Z" level=info msg="StartContainer for \"5c0d3df86dcfc34e2f6e60dfa0ccd929fa423a2252d57b6a79b3911e482d11b5\" returns successfully" Apr 17 23:46:23.331410 kubelet[3225]: E0417 23:46:23.331360 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662" Apr 17 23:46:23.444030 kubelet[3225]: E0417 23:46:23.443995 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:23.444030 kubelet[3225]: W0417 23:46:23.444020 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:23.444258 kubelet[3225]: E0417 23:46:23.444042 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:23.444337 kubelet[3225]: E0417 23:46:23.444320 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:23.444388 kubelet[3225]: W0417 23:46:23.444338 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:23.444388 kubelet[3225]: E0417 23:46:23.444355 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:23.444585 kubelet[3225]: E0417 23:46:23.444568 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:23.444585 kubelet[3225]: W0417 23:46:23.444582 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:23.444697 kubelet[3225]: E0417 23:46:23.444596 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:23.444887 kubelet[3225]: E0417 23:46:23.444867 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:23.444887 kubelet[3225]: W0417 23:46:23.444884 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:23.445009 kubelet[3225]: E0417 23:46:23.444899 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:23.445135 kubelet[3225]: E0417 23:46:23.445119 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:23.445135 kubelet[3225]: W0417 23:46:23.445134 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:23.445250 kubelet[3225]: E0417 23:46:23.445147 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:46:23.445359 kubelet[3225]: E0417 23:46:23.445342 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:23.445359 kubelet[3225]: W0417 23:46:23.445357 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:23.445470 kubelet[3225]: E0417 23:46:23.445370 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:46:23.445576 kubelet[3225]: E0417 23:46:23.445561 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:46:23.445634 kubelet[3225]: W0417 23:46:23.445586 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:46:23.445634 kubelet[3225]: E0417 23:46:23.445599 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:46:23.745471 containerd[1709]: time="2026-04-17T23:46:23.745411542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:23.748334 containerd[1709]: time="2026-04-17T23:46:23.748265074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 17 23:46:23.751401 containerd[1709]: time="2026-04-17T23:46:23.751346210Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:23.755658 containerd[1709]: time="2026-04-17T23:46:23.755622059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:23.757377 containerd[1709]: time="2026-04-17T23:46:23.757273578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.186656397s"
Apr 17 23:46:23.757377 containerd[1709]: time="2026-04-17T23:46:23.757312179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 17 23:46:23.768329 containerd[1709]: time="2026-04-17T23:46:23.768195204Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 23:46:23.800866 containerd[1709]: time="2026-04-17T23:46:23.800811981Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3\""
Apr 17 23:46:23.801455 containerd[1709]: time="2026-04-17T23:46:23.801423988Z" level=info msg="StartContainer for \"7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3\""
Apr 17 23:46:23.835504 systemd[1]: run-containerd-runc-k8s.io-7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3-runc.oY98H2.mount: Deactivated successfully.
Apr 17 23:46:23.844852 systemd[1]: Started cri-containerd-7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3.scope - libcontainer container 7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3.
Apr 17 23:46:23.877116 containerd[1709]: time="2026-04-17T23:46:23.877025761Z" level=info msg="StartContainer for \"7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3\" returns successfully"
Apr 17 23:46:23.885175 systemd[1]: cri-containerd-7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3.scope: Deactivated successfully.
Apr 17 23:46:24.439896 kubelet[3225]: I0417 23:46:24.439167 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:46:24.466683 kubelet[3225]: I0417 23:46:24.466076 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bb97989d5-5l89g" podStartSLOduration=2.91638823 podStartE2EDuration="5.46605556s" podCreationTimestamp="2026-04-17 23:46:19 +0000 UTC" firstStartedPulling="2026-04-17 23:46:20.020091542 +0000 UTC m=+22.866804849" lastFinishedPulling="2026-04-17 23:46:22.569758872 +0000 UTC m=+25.416472179" observedRunningTime="2026-04-17 23:46:23.449978531 +0000 UTC m=+26.296691738" watchObservedRunningTime="2026-04-17 23:46:24.46605556 +0000 UTC m=+27.312768767"
Apr 17 23:46:24.577687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3-rootfs.mount: Deactivated successfully.
Apr 17 23:46:25.330129 kubelet[3225]: E0417 23:46:25.328684 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:25.792490 containerd[1709]: time="2026-04-17T23:46:25.792424186Z" level=info msg="shim disconnected" id=7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3 namespace=k8s.io
Apr 17 23:46:25.793020 containerd[1709]: time="2026-04-17T23:46:25.792496087Z" level=warning msg="cleaning up after shim disconnected" id=7e3e4f979f72682d312b5348c2d78e91eee5607e475c4c1382a170bd67552de3 namespace=k8s.io
Apr 17 23:46:25.793020 containerd[1709]: time="2026-04-17T23:46:25.792507787Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:26.445788 containerd[1709]: time="2026-04-17T23:46:26.445490308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 23:46:27.329342 kubelet[3225]: E0417 23:46:27.328431 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:27.415514 kubelet[3225]: I0417 23:46:27.415087 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:46:29.330722 kubelet[3225]: E0417 23:46:29.330098 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:31.329370 kubelet[3225]: E0417 23:46:31.328763 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:33.329088 kubelet[3225]: E0417 23:46:33.328635 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:35.731670 kubelet[3225]: E0417 23:46:35.328292 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:37.330547 kubelet[3225]: E0417 23:46:37.330499 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:38.846776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005262387.mount: Deactivated successfully.
Apr 17 23:46:38.883776 containerd[1709]: time="2026-04-17T23:46:38.883722276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:38.885986 containerd[1709]: time="2026-04-17T23:46:38.885920306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 23:46:38.888488 containerd[1709]: time="2026-04-17T23:46:38.888430740Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:38.892533 containerd[1709]: time="2026-04-17T23:46:38.892479695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:38.893977 containerd[1709]: time="2026-04-17T23:46:38.893400107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.447587794s"
Apr 17 23:46:38.893977 containerd[1709]: time="2026-04-17T23:46:38.893457308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 23:46:38.905833 containerd[1709]: time="2026-04-17T23:46:38.905803575Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:46:38.931138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301194425.mount: Deactivated successfully.
Apr 17 23:46:38.940797 containerd[1709]: time="2026-04-17T23:46:38.940731649Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35\""
Apr 17 23:46:38.942211 containerd[1709]: time="2026-04-17T23:46:38.941394958Z" level=info msg="StartContainer for \"e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35\""
Apr 17 23:46:38.977879 systemd[1]: Started cri-containerd-e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35.scope - libcontainer container e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35.
Apr 17 23:46:39.011088 containerd[1709]: time="2026-04-17T23:46:39.011039804Z" level=info msg="StartContainer for \"e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35\" returns successfully"
Apr 17 23:46:39.047818 systemd[1]: cri-containerd-e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35.scope: Deactivated successfully.
Apr 17 23:46:40.188208 kubelet[3225]: E0417 23:46:39.328258 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:39.845832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35-rootfs.mount: Deactivated successfully.
Apr 17 23:46:41.329870 kubelet[3225]: E0417 23:46:41.328799 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:43.330887 kubelet[3225]: E0417 23:46:43.328826 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:45.328778 kubelet[3225]: E0417 23:46:45.328693 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:46.891528 containerd[1709]: time="2026-04-17T23:46:46.891451601Z" level=info msg="shim disconnected" id=e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35 namespace=k8s.io
Apr 17 23:46:46.891528 containerd[1709]: time="2026-04-17T23:46:46.891525102Z" level=warning msg="cleaning up after shim disconnected" id=e7bbc64c69a9a95813ce5423e7a1bdeb0cac7e3ba548c413d2f4cbbcf8fcec35 namespace=k8s.io
Apr 17 23:46:46.891528 containerd[1709]: time="2026-04-17T23:46:46.891538603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:46.904168 containerd[1709]: time="2026-04-17T23:46:46.904109681Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:46:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:46:47.330584 kubelet[3225]: E0417 23:46:47.328911 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:47.493290 containerd[1709]: time="2026-04-17T23:46:47.493035558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 23:46:49.330042 kubelet[3225]: E0417 23:46:49.328838 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:51.328742 kubelet[3225]: E0417 23:46:51.328673 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:52.488540 containerd[1709]: time="2026-04-17T23:46:52.488481962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:52.491679 containerd[1709]: time="2026-04-17T23:46:52.491519007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 17 23:46:52.494259 containerd[1709]: time="2026-04-17T23:46:52.494194047Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:52.498469 containerd[1709]: time="2026-04-17T23:46:52.498419809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:46:52.499351 containerd[1709]: time="2026-04-17T23:46:52.499179821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.006083661s"
Apr 17 23:46:52.499351 containerd[1709]: time="2026-04-17T23:46:52.499218721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 17 23:46:52.508149 containerd[1709]: time="2026-04-17T23:46:52.508113553Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 23:46:52.545315 containerd[1709]: time="2026-04-17T23:46:52.545274004Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b\""
Apr 17 23:46:52.545987 containerd[1709]: time="2026-04-17T23:46:52.545855212Z" level=info msg="StartContainer for \"ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b\""
Apr 17 23:46:52.584859 systemd[1]: Started cri-containerd-ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b.scope - libcontainer container ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b.
Apr 17 23:46:52.614023 containerd[1709]: time="2026-04-17T23:46:52.613944521Z" level=info msg="StartContainer for \"ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b\" returns successfully"
Apr 17 23:46:53.329637 kubelet[3225]: E0417 23:46:53.328935 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:55.331114 kubelet[3225]: E0417 23:46:55.330964 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:57.330580 kubelet[3225]: E0417 23:46:57.329015 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:46:59.328445 kubelet[3225]: E0417 23:46:59.328035 3225 pod_workers.go:1324] "Error syncing pod,
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662" Apr 17 23:46:59.596697 containerd[1709]: time="2026-04-17T23:46:59.596562758Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:46:59.600397 systemd[1]: cri-containerd-ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b.scope: Deactivated successfully. Apr 17 23:46:59.623091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b-rootfs.mount: Deactivated successfully. Apr 17 23:46:59.628830 kubelet[3225]: I0417 23:46:59.628434 3225 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 17 23:47:05.891703 containerd[1709]: time="2026-04-17T23:47:05.891568517Z" level=info msg="shim disconnected" id=ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b namespace=k8s.io Apr 17 23:47:05.892534 containerd[1709]: time="2026-04-17T23:47:05.891670818Z" level=warning msg="cleaning up after shim disconnected" id=ce0d230c328a757a66c5d638abc78fb9899b38fc81db5105f192bf62ead83b0b namespace=k8s.io Apr 17 23:47:05.892534 containerd[1709]: time="2026-04-17T23:47:05.891796920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:47:05.901351 systemd[1]: Created slice kubepods-burstable-pod099fd00a_16a5_4661_ac32_2536e3c7653c.slice - libcontainer container kubepods-burstable-pod099fd00a_16a5_4661_ac32_2536e3c7653c.slice. 
Apr 17 23:47:05.917132 systemd[1]: Created slice kubepods-burstable-poda634795c_d355_406a_b830_50fc4384862e.slice - libcontainer container kubepods-burstable-poda634795c_d355_406a_b830_50fc4384862e.slice.
Apr 17 23:47:05.938526 systemd[1]: Created slice kubepods-besteffort-podd55082e2_e0fa_4118_b796_695fc5437662.slice - libcontainer container kubepods-besteffort-podd55082e2_e0fa_4118_b796_695fc5437662.slice.
Apr 17 23:47:05.954432 containerd[1709]: time="2026-04-17T23:47:05.954005729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtqds,Uid:d55082e2-e0fa-4118-b796-695fc5437662,Namespace:calico-system,Attempt:0,}"
Apr 17 23:47:05.955117 systemd[1]: Created slice kubepods-besteffort-pod7034716d_ae1e_4a07_9d47_901268c9b69a.slice - libcontainer container kubepods-besteffort-pod7034716d_ae1e_4a07_9d47_901268c9b69a.slice.
Apr 17 23:47:05.971292 kubelet[3225]: I0417 23:47:05.971240 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/099fd00a-16a5-4661-ac32-2536e3c7653c-config-volume\") pod \"coredns-66bc5c9577-6rrjw\" (UID: \"099fd00a-16a5-4661-ac32-2536e3c7653c\") " pod="kube-system/coredns-66bc5c9577-6rrjw"
Apr 17 23:47:05.973021 kubelet[3225]: I0417 23:47:05.971300 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgr8r\" (UniqueName: \"kubernetes.io/projected/099fd00a-16a5-4661-ac32-2536e3c7653c-kube-api-access-rgr8r\") pod \"coredns-66bc5c9577-6rrjw\" (UID: \"099fd00a-16a5-4661-ac32-2536e3c7653c\") " pod="kube-system/coredns-66bc5c9577-6rrjw"
Apr 17 23:47:05.973021 kubelet[3225]: I0417 23:47:05.971324 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a634795c-d355-406a-b830-50fc4384862e-config-volume\") pod \"coredns-66bc5c9577-8dtqd\" (UID: \"a634795c-d355-406a-b830-50fc4384862e\") " pod="kube-system/coredns-66bc5c9577-8dtqd"
Apr 17 23:47:05.973021 kubelet[3225]: I0417 23:47:05.971345 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7034716d-ae1e-4a07-9d47-901268c9b69a-calico-apiserver-certs\") pod \"calico-apiserver-5f7ccb55fc-dvppd\" (UID: \"7034716d-ae1e-4a07-9d47-901268c9b69a\") " pod="calico-system/calico-apiserver-5f7ccb55fc-dvppd"
Apr 17 23:47:05.973021 kubelet[3225]: I0417 23:47:05.971377 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshhl\" (UniqueName: \"kubernetes.io/projected/7034716d-ae1e-4a07-9d47-901268c9b69a-kube-api-access-pshhl\") pod \"calico-apiserver-5f7ccb55fc-dvppd\" (UID: \"7034716d-ae1e-4a07-9d47-901268c9b69a\") " pod="calico-system/calico-apiserver-5f7ccb55fc-dvppd"
Apr 17 23:47:05.973021 kubelet[3225]: I0417 23:47:05.971454 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrl5\" (UniqueName: \"kubernetes.io/projected/a634795c-d355-406a-b830-50fc4384862e-kube-api-access-6zrl5\") pod \"coredns-66bc5c9577-8dtqd\" (UID: \"a634795c-d355-406a-b830-50fc4384862e\") " pod="kube-system/coredns-66bc5c9577-8dtqd"
Apr 17 23:47:05.973306 systemd[1]: Created slice kubepods-besteffort-podd0a502cd_d376_426a_82a8_7e44f6c46407.slice - libcontainer container kubepods-besteffort-podd0a502cd_d376_426a_82a8_7e44f6c46407.slice.
Apr 17 23:47:05.983279 systemd[1]: Created slice kubepods-besteffort-pod5f78a19d_36c0_4d0e_b651_34be58a4bc17.slice - libcontainer container kubepods-besteffort-pod5f78a19d_36c0_4d0e_b651_34be58a4bc17.slice.
Apr 17 23:47:06.021690 systemd[1]: Created slice kubepods-besteffort-pod8a1318ab_ffd2_447a_b894_d520f5a1dd65.slice - libcontainer container kubepods-besteffort-pod8a1318ab_ffd2_447a_b894_d520f5a1dd65.slice.
Apr 17 23:47:06.027464 systemd[1]: Created slice kubepods-besteffort-pod43b4c361_5e57_4a81_a9ff_79904f9465e0.slice - libcontainer container kubepods-besteffort-pod43b4c361_5e57_4a81_a9ff_79904f9465e0.slice.
Apr 17 23:47:06.064128 containerd[1709]: time="2026-04-17T23:47:06.064066560Z" level=error msg="Failed to destroy network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.065899 containerd[1709]: time="2026-04-17T23:47:06.065154175Z" level=error msg="encountered an error cleaning up failed sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.065899 containerd[1709]: time="2026-04-17T23:47:06.065227075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtqds,Uid:d55082e2-e0fa-4118-b796-695fc5437662,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.067600 kubelet[3225]: E0417 23:47:06.067554 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.067715 kubelet[3225]: E0417 23:47:06.067642 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vtqds"
Apr 17 23:47:06.067715 kubelet[3225]: E0417 23:47:06.067670 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vtqds"
Apr 17 23:47:06.067809 kubelet[3225]: E0417 23:47:06.067756 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vtqds_calico-system(d55082e2-e0fa-4118-b796-695fc5437662)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vtqds_calico-system(d55082e2-e0fa-4118-b796-695fc5437662)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:47:06.067921 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476-shm.mount: Deactivated successfully.
Apr 17 23:47:06.074492 kubelet[3225]: I0417 23:47:06.071933 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-backend-key-pair\") pod \"whisker-f6b6dc666-6z9zk\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " pod="calico-system/whisker-f6b6dc666-6z9zk"
Apr 17 23:47:06.074492 kubelet[3225]: I0417 23:47:06.071980 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0a502cd-d376-426a-82a8-7e44f6c46407-calico-apiserver-certs\") pod \"calico-apiserver-5f7ccb55fc-wnsrc\" (UID: \"d0a502cd-d376-426a-82a8-7e44f6c46407\") " pod="calico-system/calico-apiserver-5f7ccb55fc-wnsrc"
Apr 17 23:47:06.074492 kubelet[3225]: I0417 23:47:06.072003 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmpt\" (UniqueName: \"kubernetes.io/projected/d0a502cd-d376-426a-82a8-7e44f6c46407-kube-api-access-qrmpt\") pod \"calico-apiserver-5f7ccb55fc-wnsrc\" (UID: \"d0a502cd-d376-426a-82a8-7e44f6c46407\") " pod="calico-system/calico-apiserver-5f7ccb55fc-wnsrc"
Apr 17 23:47:06.074492 kubelet[3225]: I0417 23:47:06.072026 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f78a19d-36c0-4d0e-b651-34be58a4bc17-tigera-ca-bundle\") pod \"calico-kube-controllers-9d6dc9bbd-6sslf\" (UID: \"5f78a19d-36c0-4d0e-b651-34be58a4bc17\") " pod="calico-system/calico-kube-controllers-9d6dc9bbd-6sslf"
Apr 17 23:47:06.074492 kubelet[3225]: I0417 23:47:06.072046 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8a1318ab-ffd2-447a-b894-d520f5a1dd65-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-ql4z8\" (UID: \"8a1318ab-ffd2-447a-b894-d520f5a1dd65\") " pod="calico-system/goldmane-cccfbd5cf-ql4z8"
Apr 17 23:47:06.074840 kubelet[3225]: I0417 23:47:06.072108 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-ca-bundle\") pod \"whisker-f6b6dc666-6z9zk\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " pod="calico-system/whisker-f6b6dc666-6z9zk"
Apr 17 23:47:06.074840 kubelet[3225]: I0417 23:47:06.072130 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwlh\" (UniqueName: \"kubernetes.io/projected/8a1318ab-ffd2-447a-b894-d520f5a1dd65-kube-api-access-8xwlh\") pod \"goldmane-cccfbd5cf-ql4z8\" (UID: \"8a1318ab-ffd2-447a-b894-d520f5a1dd65\") " pod="calico-system/goldmane-cccfbd5cf-ql4z8"
Apr 17 23:47:06.074840 kubelet[3225]: I0417 23:47:06.072171 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-nginx-config\") pod \"whisker-f6b6dc666-6z9zk\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " pod="calico-system/whisker-f6b6dc666-6z9zk"
Apr 17 23:47:06.074840 kubelet[3225]: I0417 23:47:06.072195 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzkv8\" (UniqueName: \"kubernetes.io/projected/5f78a19d-36c0-4d0e-b651-34be58a4bc17-kube-api-access-bzkv8\") pod \"calico-kube-controllers-9d6dc9bbd-6sslf\" (UID: \"5f78a19d-36c0-4d0e-b651-34be58a4bc17\") " pod="calico-system/calico-kube-controllers-9d6dc9bbd-6sslf"
Apr 17 23:47:06.074840 kubelet[3225]: I0417 23:47:06.072221 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1318ab-ffd2-447a-b894-d520f5a1dd65-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-ql4z8\" (UID: \"8a1318ab-ffd2-447a-b894-d520f5a1dd65\") " pod="calico-system/goldmane-cccfbd5cf-ql4z8"
Apr 17 23:47:06.075105 kubelet[3225]: I0417 23:47:06.072259 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a1318ab-ffd2-447a-b894-d520f5a1dd65-config\") pod \"goldmane-cccfbd5cf-ql4z8\" (UID: \"8a1318ab-ffd2-447a-b894-d520f5a1dd65\") " pod="calico-system/goldmane-cccfbd5cf-ql4z8"
Apr 17 23:47:06.075105 kubelet[3225]: I0417 23:47:06.072287 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5q6f\" (UniqueName: \"kubernetes.io/projected/43b4c361-5e57-4a81-a9ff-79904f9465e0-kube-api-access-v5q6f\") pod \"whisker-f6b6dc666-6z9zk\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " pod="calico-system/whisker-f6b6dc666-6z9zk"
Apr 17 23:47:06.221659 containerd[1709]: time="2026-04-17T23:47:06.221502208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6rrjw,Uid:099fd00a-16a5-4661-ac32-2536e3c7653c,Namespace:kube-system,Attempt:0,}"
Apr 17 23:47:06.239159 containerd[1709]: time="2026-04-17T23:47:06.239116237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8dtqd,Uid:a634795c-d355-406a-b830-50fc4384862e,Namespace:kube-system,Attempt:0,}"
Apr 17 23:47:06.270901 containerd[1709]: time="2026-04-17T23:47:06.270853150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-dvppd,Uid:7034716d-ae1e-4a07-9d47-901268c9b69a,Namespace:calico-system,Attempt:0,}"
Apr 17 23:47:06.295309 containerd[1709]: time="2026-04-17T23:47:06.294206853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-wnsrc,Uid:d0a502cd-d376-426a-82a8-7e44f6c46407,Namespace:calico-system,Attempt:0,}"
Apr 17 23:47:06.316296 containerd[1709]: time="2026-04-17T23:47:06.316246140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d6dc9bbd-6sslf,Uid:5f78a19d-36c0-4d0e-b651-34be58a4bc17,Namespace:calico-system,Attempt:0,}"
Apr 17 23:47:06.351509 containerd[1709]: time="2026-04-17T23:47:06.351455098Z" level=error msg="Failed to destroy network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.352070 containerd[1709]: time="2026-04-17T23:47:06.352028005Z" level=error msg="encountered an error cleaning up failed sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.352319 containerd[1709]: time="2026-04-17T23:47:06.352286909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6rrjw,Uid:099fd00a-16a5-4661-ac32-2536e3c7653c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.353223 kubelet[3225]: E0417 23:47:06.352673 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.353223 kubelet[3225]: E0417 23:47:06.352772 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6rrjw"
Apr 17 23:47:06.353223 kubelet[3225]: E0417 23:47:06.352805 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6rrjw"
Apr 17 23:47:06.353465 kubelet[3225]: E0417 23:47:06.352901 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6rrjw_kube-system(099fd00a-16a5-4661-ac32-2536e3c7653c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6rrjw_kube-system(099fd00a-16a5-4661-ac32-2536e3c7653c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6rrjw" podUID="099fd00a-16a5-4661-ac32-2536e3c7653c"
Apr 17 23:47:06.366920 containerd[1709]: time="2026-04-17T23:47:06.366872898Z" level=error msg="Failed to destroy network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.367277 containerd[1709]: time="2026-04-17T23:47:06.367244103Z" level=error msg="encountered an error cleaning up failed sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.367396 containerd[1709]: time="2026-04-17T23:47:06.367306404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8dtqd,Uid:a634795c-d355-406a-b830-50fc4384862e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.367606 kubelet[3225]: E0417 23:47:06.367564 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.367740 kubelet[3225]: E0417 23:47:06.367631 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8dtqd"
Apr 17 23:47:06.367740 kubelet[3225]: E0417 23:47:06.367661 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8dtqd"
Apr 17 23:47:06.368116 kubelet[3225]: E0417 23:47:06.367754 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8dtqd_kube-system(a634795c-d355-406a-b830-50fc4384862e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8dtqd_kube-system(a634795c-d355-406a-b830-50fc4384862e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8dtqd" podUID="a634795c-d355-406a-b830-50fc4384862e"
Apr 17 23:47:06.391305 containerd[1709]: time="2026-04-17T23:47:06.391266616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ql4z8,Uid:8a1318ab-ffd2-447a-b894-d520f5a1dd65,Namespace:calico-system,Attempt:0,}"
Apr 17 23:47:06.531194 containerd[1709]: time="2026-04-17T23:47:06.531144835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f6b6dc666-6z9zk,Uid:43b4c361-5e57-4a81-a9ff-79904f9465e0,Namespace:calico-system,Attempt:0,}"
Apr 17 23:47:06.531723 kubelet[3225]: I0417 23:47:06.531669 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb"
Apr 17 23:47:06.533017 containerd[1709]: time="2026-04-17T23:47:06.532938658Z" level=info msg="StopPodSandbox for \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\""
Apr 17 23:47:06.534224 containerd[1709]: time="2026-04-17T23:47:06.533264562Z" level=info msg="Ensure that sandbox 445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb in task-service has been cleanup successfully"
Apr 17 23:47:06.536387 kubelet[3225]: I0417 23:47:06.535964 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892"
Apr 17 23:47:06.536787 containerd[1709]: time="2026-04-17T23:47:06.536660607Z" level=info msg="StopPodSandbox for \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\""
Apr 17 23:47:06.537567 containerd[1709]: time="2026-04-17T23:47:06.537238714Z" level=info msg="Ensure that sandbox b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892 in task-service has been cleanup successfully"
Apr 17 23:47:06.540811 kubelet[3225]: I0417 23:47:06.540230 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476"
Apr 17 23:47:06.541374 containerd[1709]: time="2026-04-17T23:47:06.541157065Z" level=info msg="StopPodSandbox for \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\""
Apr 17 23:47:06.542063 containerd[1709]: time="2026-04-17T23:47:06.541939575Z" level=info msg="Ensure that sandbox 85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476 in task-service has been cleanup successfully"
Apr 17 23:47:06.585536 containerd[1709]: time="2026-04-17T23:47:06.585488242Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 23:47:06.615995 containerd[1709]: time="2026-04-17T23:47:06.615455531Z" level=error msg="StopPodSandbox for \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\" failed" error="failed to destroy network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.616163 kubelet[3225]: E0417 23:47:06.615734 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892"
Apr 17 23:47:06.616163 kubelet[3225]: E0417 23:47:06.615797 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892"}
Apr 17 23:47:06.616163 kubelet[3225]: E0417 23:47:06.615863 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"099fd00a-16a5-4661-ac32-2536e3c7653c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:47:06.616163 kubelet[3225]: E0417 23:47:06.615897 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"099fd00a-16a5-4661-ac32-2536e3c7653c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6rrjw" podUID="099fd00a-16a5-4661-ac32-2536e3c7653c"
Apr 17 23:47:06.625471 containerd[1709]: time="2026-04-17T23:47:06.625144357Z" level=error msg="StopPodSandbox for \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\" failed" error="failed to destroy network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.625889 kubelet[3225]: E0417 23:47:06.625398 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476"
Apr 17 23:47:06.625889 kubelet[3225]: E0417 23:47:06.625449 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476"}
Apr 17 23:47:06.625889 kubelet[3225]: E0417 23:47:06.625484 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d55082e2-e0fa-4118-b796-695fc5437662\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:47:06.625889 kubelet[3225]: E0417 23:47:06.625533 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d55082e2-e0fa-4118-b796-695fc5437662\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vtqds" podUID="d55082e2-e0fa-4118-b796-695fc5437662"
Apr 17 23:47:06.629647 containerd[1709]: time="2026-04-17T23:47:06.629511614Z" level=error msg="StopPodSandbox for \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\" failed" error="failed to destroy network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.629847 kubelet[3225]: E0417 23:47:06.629792 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb"
Apr 17 23:47:06.629936 kubelet[3225]: E0417 23:47:06.629864 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb"}
Apr 17 23:47:06.629936 kubelet[3225]: E0417 23:47:06.629909 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a634795c-d355-406a-b830-50fc4384862e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:47:06.630050 kubelet[3225]: E0417 23:47:06.629941 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a634795c-d355-406a-b830-50fc4384862e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8dtqd" podUID="a634795c-d355-406a-b830-50fc4384862e"
Apr 17 23:47:06.697726 containerd[1709]: time="2026-04-17T23:47:06.697648700Z" level=error msg="Failed to destroy network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.698030 containerd[1709]: time="2026-04-17T23:47:06.697987805Z" level=error msg="encountered an error cleaning up failed sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.698119 containerd[1709]: time="2026-04-17T23:47:06.698047305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-dvppd,Uid:7034716d-ae1e-4a07-9d47-901268c9b69a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.698428 kubelet[3225]: E0417 23:47:06.698345 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:47:06.698534 kubelet[3225]: E0417 23:47:06.698423 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5f7ccb55fc-dvppd"
Apr 17 23:47:06.698534 kubelet[3225]: E0417 23:47:06.698449 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5f7ccb55fc-dvppd" Apr 17 23:47:06.698874 kubelet[3225]: E0417 23:47:06.698539 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f7ccb55fc-dvppd_calico-system(7034716d-ae1e-4a07-9d47-901268c9b69a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f7ccb55fc-dvppd_calico-system(7034716d-ae1e-4a07-9d47-901268c9b69a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5f7ccb55fc-dvppd" podUID="7034716d-ae1e-4a07-9d47-901268c9b69a" Apr 17 23:47:06.738171 containerd[1709]: time="2026-04-17T23:47:06.736800509Z" level=info msg="CreateContainer within sandbox \"4507aeffbb4ec13355467591077af6d7de37c23101c7799f136731cf9aa81386\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0fe5ed3a644e2d2ea61608d0309b65a2e66698d0c7e790a4fbffb7e10034ce01\"" Apr 17 23:47:06.741389 containerd[1709]: time="2026-04-17T23:47:06.741351169Z" level=info msg="StartContainer for \"0fe5ed3a644e2d2ea61608d0309b65a2e66698d0c7e790a4fbffb7e10034ce01\"" Apr 17 23:47:06.850505 containerd[1709]: time="2026-04-17T23:47:06.850306986Z" level=error msg="Failed to destroy network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.857729 containerd[1709]: time="2026-04-17T23:47:06.857539480Z" level=error msg="encountered an error cleaning up failed sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.857729 containerd[1709]: time="2026-04-17T23:47:06.857640181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-wnsrc,Uid:d0a502cd-d376-426a-82a8-7e44f6c46407,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.857970 kubelet[3225]: E0417 23:47:06.857911 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.858034 kubelet[3225]: E0417 23:47:06.857972 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-5f7ccb55fc-wnsrc" Apr 17 23:47:06.858034 kubelet[3225]: E0417 23:47:06.857997 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5f7ccb55fc-wnsrc" Apr 17 23:47:06.858129 kubelet[3225]: E0417 23:47:06.858070 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f7ccb55fc-wnsrc_calico-system(d0a502cd-d376-426a-82a8-7e44f6c46407)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f7ccb55fc-wnsrc_calico-system(d0a502cd-d376-426a-82a8-7e44f6c46407)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5f7ccb55fc-wnsrc" podUID="d0a502cd-d376-426a-82a8-7e44f6c46407" Apr 17 23:47:06.862312 systemd[1]: Started cri-containerd-0fe5ed3a644e2d2ea61608d0309b65a2e66698d0c7e790a4fbffb7e10034ce01.scope - libcontainer container 0fe5ed3a644e2d2ea61608d0309b65a2e66698d0c7e790a4fbffb7e10034ce01. 
Apr 17 23:47:06.906096 containerd[1709]: time="2026-04-17T23:47:06.905260300Z" level=error msg="Failed to destroy network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.906096 containerd[1709]: time="2026-04-17T23:47:06.905639705Z" level=error msg="encountered an error cleaning up failed sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.906096 containerd[1709]: time="2026-04-17T23:47:06.905746907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d6dc9bbd-6sslf,Uid:5f78a19d-36c0-4d0e-b651-34be58a4bc17,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.906652 kubelet[3225]: E0417 23:47:06.906141 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.906652 kubelet[3225]: E0417 23:47:06.906465 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9d6dc9bbd-6sslf" Apr 17 23:47:06.906652 kubelet[3225]: E0417 23:47:06.906493 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9d6dc9bbd-6sslf" Apr 17 23:47:06.907168 kubelet[3225]: E0417 23:47:06.906575 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9d6dc9bbd-6sslf_calico-system(5f78a19d-36c0-4d0e-b651-34be58a4bc17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9d6dc9bbd-6sslf_calico-system(5f78a19d-36c0-4d0e-b651-34be58a4bc17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9d6dc9bbd-6sslf" podUID="5f78a19d-36c0-4d0e-b651-34be58a4bc17" Apr 17 23:47:06.924141 containerd[1709]: time="2026-04-17T23:47:06.923915043Z" level=error msg="Failed to destroy network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.924458 containerd[1709]: time="2026-04-17T23:47:06.924325948Z" level=error msg="encountered an error cleaning up failed sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.924458 containerd[1709]: time="2026-04-17T23:47:06.924404349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ql4z8,Uid:8a1318ab-ffd2-447a-b894-d520f5a1dd65,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.925223 kubelet[3225]: E0417 23:47:06.924665 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.925223 kubelet[3225]: E0417 23:47:06.924745 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-cccfbd5cf-ql4z8" Apr 17 23:47:06.925223 kubelet[3225]: E0417 23:47:06.924775 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-ql4z8" Apr 17 23:47:06.926846 kubelet[3225]: E0417 23:47:06.924885 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-ql4z8_calico-system(8a1318ab-ffd2-447a-b894-d520f5a1dd65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-ql4z8_calico-system(8a1318ab-ffd2-447a-b894-d520f5a1dd65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-ql4z8" podUID="8a1318ab-ffd2-447a-b894-d520f5a1dd65" Apr 17 23:47:06.936836 containerd[1709]: time="2026-04-17T23:47:06.936788310Z" level=info msg="StartContainer for \"0fe5ed3a644e2d2ea61608d0309b65a2e66698d0c7e790a4fbffb7e10034ce01\" returns successfully" Apr 17 23:47:06.938979 containerd[1709]: time="2026-04-17T23:47:06.938888538Z" level=error msg="Failed to destroy network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.939324 containerd[1709]: 
time="2026-04-17T23:47:06.939216742Z" level=error msg="encountered an error cleaning up failed sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.939324 containerd[1709]: time="2026-04-17T23:47:06.939278543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f6b6dc666-6z9zk,Uid:43b4c361-5e57-4a81-a9ff-79904f9465e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.939737 kubelet[3225]: E0417 23:47:06.939524 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:47:06.939737 kubelet[3225]: E0417 23:47:06.939623 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f6b6dc666-6z9zk" Apr 17 23:47:06.939737 kubelet[3225]: E0417 23:47:06.939651 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f6b6dc666-6z9zk" Apr 17 23:47:06.939926 kubelet[3225]: E0417 23:47:06.939703 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f6b6dc666-6z9zk_calico-system(43b4c361-5e57-4a81-a9ff-79904f9465e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f6b6dc666-6z9zk_calico-system(43b4c361-5e57-4a81-a9ff-79904f9465e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f6b6dc666-6z9zk" podUID="43b4c361-5e57-4a81-a9ff-79904f9465e0" Apr 17 23:47:07.557490 kubelet[3225]: I0417 23:47:07.557353 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:47:07.559294 containerd[1709]: time="2026-04-17T23:47:07.558702798Z" level=info msg="StopPodSandbox for \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\"" Apr 17 23:47:07.559294 containerd[1709]: time="2026-04-17T23:47:07.558981002Z" level=info msg="Ensure that sandbox 51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff in task-service has been cleanup successfully" Apr 17 23:47:07.561293 kubelet[3225]: I0417 23:47:07.560548 3225 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:47:07.563414 containerd[1709]: time="2026-04-17T23:47:07.563387459Z" level=info msg="StopPodSandbox for \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\"" Apr 17 23:47:07.563729 containerd[1709]: time="2026-04-17T23:47:07.563685663Z" level=info msg="Ensure that sandbox a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7 in task-service has been cleanup successfully" Apr 17 23:47:07.563972 kubelet[3225]: I0417 23:47:07.563946 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:47:07.564763 containerd[1709]: time="2026-04-17T23:47:07.564700176Z" level=info msg="StopPodSandbox for \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\"" Apr 17 23:47:07.566728 containerd[1709]: time="2026-04-17T23:47:07.565351185Z" level=info msg="Ensure that sandbox b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc in task-service has been cleanup successfully" Apr 17 23:47:07.571483 kubelet[3225]: I0417 23:47:07.571459 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:47:07.573799 containerd[1709]: time="2026-04-17T23:47:07.573767494Z" level=info msg="StopPodSandbox for \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\"" Apr 17 23:47:07.574242 containerd[1709]: time="2026-04-17T23:47:07.574194500Z" level=info msg="Ensure that sandbox c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b in task-service has been cleanup successfully" Apr 17 23:47:07.579405 kubelet[3225]: I0417 23:47:07.579383 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:47:07.584109 containerd[1709]: 
time="2026-04-17T23:47:07.584072328Z" level=info msg="StopPodSandbox for \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\"" Apr 17 23:47:07.584790 containerd[1709]: time="2026-04-17T23:47:07.584755337Z" level=info msg="Ensure that sandbox cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e in task-service has been cleanup successfully" Apr 17 23:47:07.641161 kubelet[3225]: I0417 23:47:07.640624 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-989d2" podStartSLOduration=16.322986045 podStartE2EDuration="48.640602464s" podCreationTimestamp="2026-04-17 23:46:19 +0000 UTC" firstStartedPulling="2026-04-17 23:46:20.182848021 +0000 UTC m=+23.029561228" lastFinishedPulling="2026-04-17 23:46:52.50046444 +0000 UTC m=+55.347177647" observedRunningTime="2026-04-17 23:47:07.637981329 +0000 UTC m=+70.484694536" watchObservedRunningTime="2026-04-17 23:47:07.640602464 +0000 UTC m=+70.487315671" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.802 [INFO][4422] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.802 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" iface="eth0" netns="/var/run/netns/cni-2a091374-ae9e-0707-abac-ddfa144f9495" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.803 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" iface="eth0" netns="/var/run/netns/cni-2a091374-ae9e-0707-abac-ddfa144f9495" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.804 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" iface="eth0" netns="/var/run/netns/cni-2a091374-ae9e-0707-abac-ddfa144f9495" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.804 [INFO][4422] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.804 [INFO][4422] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.887 [INFO][4457] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.889 [INFO][4457] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.890 [INFO][4457] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.900 [WARNING][4457] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.901 [INFO][4457] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.904 [INFO][4457] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:07.926929 containerd[1709]: 2026-04-17 23:47:07.917 [INFO][4422] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:47:07.936692 containerd[1709]: time="2026-04-17T23:47:07.931363445Z" level=info msg="TearDown network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\" successfully" Apr 17 23:47:07.940009 systemd[1]: run-netns-cni\x2d2a091374\x2dae9e\x2d0707\x2dabac\x2dddfa144f9495.mount: Deactivated successfully. 
Apr 17 23:47:07.941993 containerd[1709]: time="2026-04-17T23:47:07.941843281Z" level=info msg="StopPodSandbox for \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\" returns successfully" Apr 17 23:47:07.950462 containerd[1709]: time="2026-04-17T23:47:07.950364992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ql4z8,Uid:8a1318ab-ffd2-447a-b894-d520f5a1dd65,Namespace:calico-system,Attempt:1,}" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.805 [INFO][4433] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.808 [INFO][4433] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" iface="eth0" netns="/var/run/netns/cni-6e1eadf2-33b0-756a-3fed-77e72484372a" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.808 [INFO][4433] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" iface="eth0" netns="/var/run/netns/cni-6e1eadf2-33b0-756a-3fed-77e72484372a" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.809 [INFO][4433] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" iface="eth0" netns="/var/run/netns/cni-6e1eadf2-33b0-756a-3fed-77e72484372a" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.809 [INFO][4433] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.809 [INFO][4433] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.928 [INFO][4462] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.930 [INFO][4462] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.930 [INFO][4462] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.948 [WARNING][4462] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.948 [INFO][4462] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.952 [INFO][4462] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:07.969984 containerd[1709]: 2026-04-17 23:47:07.961 [INFO][4433] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:47:07.976765 systemd[1]: run-netns-cni\x2d6e1eadf2\x2d33b0\x2d756a\x2d3fed\x2d77e72484372a.mount: Deactivated successfully. 
Apr 17 23:47:07.978488 containerd[1709]: time="2026-04-17T23:47:07.977309443Z" level=info msg="TearDown network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\" successfully" Apr 17 23:47:07.978488 containerd[1709]: time="2026-04-17T23:47:07.977355343Z" level=info msg="StopPodSandbox for \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\" returns successfully" Apr 17 23:47:07.984585 containerd[1709]: time="2026-04-17T23:47:07.984449735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d6dc9bbd-6sslf,Uid:5f78a19d-36c0-4d0e-b651-34be58a4bc17,Namespace:calico-system,Attempt:1,}" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.838 [INFO][4408] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.838 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" iface="eth0" netns="/var/run/netns/cni-d1e7f19f-c519-b302-e581-dd5b583b06d4" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.839 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" iface="eth0" netns="/var/run/netns/cni-d1e7f19f-c519-b302-e581-dd5b583b06d4" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.841 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" iface="eth0" netns="/var/run/netns/cni-d1e7f19f-c519-b302-e581-dd5b583b06d4" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.841 [INFO][4408] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.841 [INFO][4408] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.986 [INFO][4477] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.987 [INFO][4477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:07.987 [INFO][4477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:08.007 [WARNING][4477] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:08.007 [INFO][4477] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:08.009 [INFO][4477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:08.014429 containerd[1709]: 2026-04-17 23:47:08.011 [INFO][4408] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:47:08.022053 containerd[1709]: time="2026-04-17T23:47:08.014589427Z" level=info msg="TearDown network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\" successfully" Apr 17 23:47:08.022053 containerd[1709]: time="2026-04-17T23:47:08.014622528Z" level=info msg="StopPodSandbox for \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\" returns successfully" Apr 17 23:47:08.020005 systemd[1]: run-netns-cni\x2dd1e7f19f\x2dc519\x2db302\x2de581\x2ddd5b583b06d4.mount: Deactivated successfully. 
Apr 17 23:47:08.026973 containerd[1709]: time="2026-04-17T23:47:08.026020476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-wnsrc,Uid:d0a502cd-d376-426a-82a8-7e44f6c46407,Namespace:calico-system,Attempt:1,}" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:07.854 [INFO][4387] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:07.854 [INFO][4387] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" iface="eth0" netns="/var/run/netns/cni-7b862e3e-116d-2f5f-5913-42a2cc4e07b5" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:07.854 [INFO][4387] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" iface="eth0" netns="/var/run/netns/cni-7b862e3e-116d-2f5f-5913-42a2cc4e07b5" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:07.855 [INFO][4387] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" iface="eth0" netns="/var/run/netns/cni-7b862e3e-116d-2f5f-5913-42a2cc4e07b5" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:07.856 [INFO][4387] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:07.856 [INFO][4387] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:08.016 [INFO][4483] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:08.020 [INFO][4483] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:08.020 [INFO][4483] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:08.033 [WARNING][4483] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:08.033 [INFO][4483] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:08.035 [INFO][4483] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:08.041613 containerd[1709]: 2026-04-17 23:47:08.039 [INFO][4387] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:47:08.046010 containerd[1709]: time="2026-04-17T23:47:08.044137412Z" level=info msg="TearDown network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\" successfully" Apr 17 23:47:08.046010 containerd[1709]: time="2026-04-17T23:47:08.044672919Z" level=info msg="StopPodSandbox for \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\" returns successfully" Apr 17 23:47:08.047437 systemd[1]: run-netns-cni\x2d7b862e3e\x2d116d\x2d2f5f\x2d5913\x2d42a2cc4e07b5.mount: Deactivated successfully. 
Apr 17 23:47:08.053986 containerd[1709]: time="2026-04-17T23:47:08.053948539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-dvppd,Uid:7034716d-ae1e-4a07-9d47-901268c9b69a,Namespace:calico-system,Attempt:1,}" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:07.837 [INFO][4416] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:07.837 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" iface="eth0" netns="/var/run/netns/cni-25c6a46b-aad3-c320-6b5b-d34d6bfebd2a" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:07.838 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" iface="eth0" netns="/var/run/netns/cni-25c6a46b-aad3-c320-6b5b-d34d6bfebd2a" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:07.839 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" iface="eth0" netns="/var/run/netns/cni-25c6a46b-aad3-c320-6b5b-d34d6bfebd2a" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:07.839 [INFO][4416] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:07.840 [INFO][4416] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:08.020 [INFO][4475] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:08.020 [INFO][4475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:08.035 [INFO][4475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:08.054 [WARNING][4475] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:08.054 [INFO][4475] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:08.057 [INFO][4475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:08.070502 containerd[1709]: 2026-04-17 23:47:08.064 [INFO][4416] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:47:08.071574 containerd[1709]: time="2026-04-17T23:47:08.070896060Z" level=info msg="TearDown network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\" successfully" Apr 17 23:47:08.071574 containerd[1709]: time="2026-04-17T23:47:08.070941260Z" level=info msg="StopPodSandbox for \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\" returns successfully" Apr 17 23:47:08.076467 systemd[1]: run-netns-cni\x2d25c6a46b\x2daad3\x2dc320\x2d6b5b\x2dd34d6bfebd2a.mount: Deactivated successfully. 
Apr 17 23:47:08.195829 kubelet[3225]: I0417 23:47:08.195341 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-nginx-config\") pod \"43b4c361-5e57-4a81-a9ff-79904f9465e0\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " Apr 17 23:47:08.195829 kubelet[3225]: I0417 23:47:08.195402 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5q6f\" (UniqueName: \"kubernetes.io/projected/43b4c361-5e57-4a81-a9ff-79904f9465e0-kube-api-access-v5q6f\") pod \"43b4c361-5e57-4a81-a9ff-79904f9465e0\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " Apr 17 23:47:08.195829 kubelet[3225]: I0417 23:47:08.195446 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-backend-key-pair\") pod \"43b4c361-5e57-4a81-a9ff-79904f9465e0\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " Apr 17 23:47:08.195829 kubelet[3225]: I0417 23:47:08.195481 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-ca-bundle\") pod \"43b4c361-5e57-4a81-a9ff-79904f9465e0\" (UID: \"43b4c361-5e57-4a81-a9ff-79904f9465e0\") " Apr 17 23:47:08.197550 kubelet[3225]: I0417 23:47:08.197510 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "43b4c361-5e57-4a81-a9ff-79904f9465e0" (UID: "43b4c361-5e57-4a81-a9ff-79904f9465e0"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:47:08.205728 kubelet[3225]: I0417 23:47:08.204194 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "43b4c361-5e57-4a81-a9ff-79904f9465e0" (UID: "43b4c361-5e57-4a81-a9ff-79904f9465e0"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:47:08.209409 kubelet[3225]: I0417 23:47:08.209364 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b4c361-5e57-4a81-a9ff-79904f9465e0-kube-api-access-v5q6f" (OuterVolumeSpecName: "kube-api-access-v5q6f") pod "43b4c361-5e57-4a81-a9ff-79904f9465e0" (UID: "43b4c361-5e57-4a81-a9ff-79904f9465e0"). InnerVolumeSpecName "kube-api-access-v5q6f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:47:08.209724 kubelet[3225]: I0417 23:47:08.209680 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "43b4c361-5e57-4a81-a9ff-79904f9465e0" (UID: "43b4c361-5e57-4a81-a9ff-79904f9465e0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:47:08.297318 kubelet[3225]: I0417 23:47:08.297273 3225 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-nginx-config\") on node \"ci-4081.3.6-n-7251cc3c8a\" DevicePath \"\"" Apr 17 23:47:08.298865 kubelet[3225]: I0417 23:47:08.298775 3225 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5q6f\" (UniqueName: \"kubernetes.io/projected/43b4c361-5e57-4a81-a9ff-79904f9465e0-kube-api-access-v5q6f\") on node \"ci-4081.3.6-n-7251cc3c8a\" DevicePath \"\"" Apr 17 23:47:08.298865 kubelet[3225]: I0417 23:47:08.298824 3225 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-7251cc3c8a\" DevicePath \"\"" Apr 17 23:47:08.298865 kubelet[3225]: I0417 23:47:08.298840 3225 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43b4c361-5e57-4a81-a9ff-79904f9465e0-whisker-ca-bundle\") on node \"ci-4081.3.6-n-7251cc3c8a\" DevicePath \"\"" Apr 17 23:47:08.360131 systemd-networkd[1329]: cali7b4a9b097b1: Link UP Apr 17 23:47:08.362949 systemd-networkd[1329]: cali7b4a9b097b1: Gained carrier Apr 17 23:47:08.365276 systemd-networkd[1329]: calic0915fbd163: Link UP Apr 17 23:47:08.366163 systemd-networkd[1329]: calic0915fbd163: Gained carrier Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.142 [ERROR][4515] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.169 [INFO][4515] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0 calico-kube-controllers-9d6dc9bbd- calico-system 5f78a19d-36c0-4d0e-b651-34be58a4bc17 951 0 2026-04-17 23:46:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9d6dc9bbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a calico-kube-controllers-9d6dc9bbd-6sslf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic0915fbd163 [] [] }} ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.170 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.254 [INFO][4553] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" HandleID="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.272 [INFO][4553] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" HandleID="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" 
Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"calico-kube-controllers-9d6dc9bbd-6sslf", "timestamp":"2026-04-17 23:47:08.254527848 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00035f340)} Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.273 [INFO][4553] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.276 [INFO][4553] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.276 [INFO][4553] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.291 [INFO][4553] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.316 [INFO][4553] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.324 [INFO][4553] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.326 [INFO][4553] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.328 [INFO][4553] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.328 [INFO][4553] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.329 [INFO][4553] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.333 [INFO][4553] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.338 [INFO][4553] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.130/26] block=192.168.29.128/26 handle="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.338 [INFO][4553] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.130/26] handle="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.339 [INFO][4553] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:47:08.395579 containerd[1709]: 2026-04-17 23:47:08.339 [INFO][4553] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.130/26] IPv6=[] ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" HandleID="k8s-pod-network.465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:08.397934 containerd[1709]: 2026-04-17 23:47:08.341 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0", GenerateName:"calico-kube-controllers-9d6dc9bbd-", Namespace:"calico-system", SelfLink:"", UID:"5f78a19d-36c0-4d0e-b651-34be58a4bc17", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d6dc9bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"calico-kube-controllers-9d6dc9bbd-6sslf", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0915fbd163", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.397934 containerd[1709]: 2026-04-17 23:47:08.341 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.130/32] ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:08.397934 containerd[1709]: 2026-04-17 23:47:08.341 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0915fbd163 ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:08.397934 containerd[1709]: 2026-04-17 23:47:08.365 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:08.397934 containerd[1709]: 2026-04-17 23:47:08.365 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0", GenerateName:"calico-kube-controllers-9d6dc9bbd-", Namespace:"calico-system", SelfLink:"", UID:"5f78a19d-36c0-4d0e-b651-34be58a4bc17", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d6dc9bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e", Pod:"calico-kube-controllers-9d6dc9bbd-6sslf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0915fbd163", MAC:"0a:69:3f:6e:07:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.397934 containerd[1709]: 2026-04-17 23:47:08.391 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e" Namespace="calico-system" Pod="calico-kube-controllers-9d6dc9bbd-6sslf" 
WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.083 [ERROR][4505] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.108 [INFO][4505] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0 goldmane-cccfbd5cf- calico-system 8a1318ab-ffd2-447a-b894-d520f5a1dd65 950 0 2026-04-17 23:46:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a goldmane-cccfbd5cf-ql4z8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7b4a9b097b1 [] [] }} ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.108 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.171 [INFO][4532] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" HandleID="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" 
Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.186 [INFO][4532] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" HandleID="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"goldmane-cccfbd5cf-ql4z8", "timestamp":"2026-04-17 23:47:08.171784372 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000433080)} Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.186 [INFO][4532] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.186 [INFO][4532] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.186 [INFO][4532] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.193 [INFO][4532] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.216 [INFO][4532] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.228 [INFO][4532] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.231 [INFO][4532] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.238 [INFO][4532] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.238 [INFO][4532] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.243 [INFO][4532] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.252 [INFO][4532] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.274 [INFO][4532] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.129/26] block=192.168.29.128/26 handle="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.274 [INFO][4532] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.129/26] handle="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.274 [INFO][4532] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:08.399196 containerd[1709]: 2026-04-17 23:47:08.275 [INFO][4532] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.129/26] IPv6=[] ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" HandleID="k8s-pod-network.86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:08.401250 containerd[1709]: 2026-04-17 23:47:08.280 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8a1318ab-ffd2-447a-b894-d520f5a1dd65", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"goldmane-cccfbd5cf-ql4z8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b4a9b097b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.401250 containerd[1709]: 2026-04-17 23:47:08.280 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.129/32] ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:08.401250 containerd[1709]: 2026-04-17 23:47:08.280 [INFO][4505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b4a9b097b1 ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:08.401250 containerd[1709]: 2026-04-17 23:47:08.359 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:08.401250 containerd[1709]: 2026-04-17 23:47:08.360 [INFO][4505] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8a1318ab-ffd2-447a-b894-d520f5a1dd65", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef", Pod:"goldmane-cccfbd5cf-ql4z8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b4a9b097b1", MAC:"f2:9f:3c:dd:76:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.401250 containerd[1709]: 2026-04-17 23:47:08.393 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ql4z8" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:47:08.427497 containerd[1709]: time="2026-04-17T23:47:08.426690887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:08.427497 containerd[1709]: time="2026-04-17T23:47:08.426842789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:08.427497 containerd[1709]: time="2026-04-17T23:47:08.426866789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.427935 containerd[1709]: time="2026-04-17T23:47:08.427424796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.433911 containerd[1709]: time="2026-04-17T23:47:08.433627777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:08.433911 containerd[1709]: time="2026-04-17T23:47:08.433701678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:08.433911 containerd[1709]: time="2026-04-17T23:47:08.433759979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.434760 containerd[1709]: time="2026-04-17T23:47:08.433877280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.458232 systemd[1]: Started cri-containerd-86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef.scope - libcontainer container 86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef. Apr 17 23:47:08.478167 systemd[1]: Started cri-containerd-465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e.scope - libcontainer container 465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e. Apr 17 23:47:08.498869 systemd-networkd[1329]: cali684756efbe2: Link UP Apr 17 23:47:08.499175 systemd-networkd[1329]: cali684756efbe2: Gained carrier Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.196 [ERROR][4526] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.236 [INFO][4526] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0 calico-apiserver-5f7ccb55fc- calico-system d0a502cd-d376-426a-82a8-7e44f6c46407 953 0 2026-04-17 23:46:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f7ccb55fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a calico-apiserver-5f7ccb55fc-wnsrc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali684756efbe2 [] [] }} ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 
23:47:08.236 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.295 [INFO][4567] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" HandleID="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.307 [INFO][4567] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" HandleID="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"calico-apiserver-5f7ccb55fc-wnsrc", "timestamp":"2026-04-17 23:47:08.295224777 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002914a0)} Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.307 [INFO][4567] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.339 [INFO][4567] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.339 [INFO][4567] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.396 [INFO][4567] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.442 [INFO][4567] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.450 [INFO][4567] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.454 [INFO][4567] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.463 [INFO][4567] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.463 [INFO][4567] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.466 [INFO][4567] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.475 [INFO][4567] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.488 [INFO][4567] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.131/26] block=192.168.29.128/26 handle="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.488 [INFO][4567] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.131/26] handle="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.488 [INFO][4567] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:08.526541 containerd[1709]: 2026-04-17 23:47:08.488 [INFO][4567] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.131/26] IPv6=[] ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" HandleID="k8s-pod-network.c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.528233 containerd[1709]: 2026-04-17 23:47:08.491 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"d0a502cd-d376-426a-82a8-7e44f6c46407", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"calico-apiserver-5f7ccb55fc-wnsrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali684756efbe2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.528233 containerd[1709]: 2026-04-17 23:47:08.491 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.131/32] ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.528233 containerd[1709]: 2026-04-17 23:47:08.491 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali684756efbe2 ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.528233 containerd[1709]: 2026-04-17 23:47:08.501 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" 
WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.528233 containerd[1709]: 2026-04-17 23:47:08.503 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"d0a502cd-d376-426a-82a8-7e44f6c46407", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee", Pod:"calico-apiserver-5f7ccb55fc-wnsrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali684756efbe2", MAC:"66:43:b9:1b:cb:98", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.528233 containerd[1709]: 2026-04-17 23:47:08.524 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-wnsrc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:47:08.566277 containerd[1709]: time="2026-04-17T23:47:08.565817596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:08.566277 containerd[1709]: time="2026-04-17T23:47:08.565952598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:08.566277 containerd[1709]: time="2026-04-17T23:47:08.566097400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.566277 containerd[1709]: time="2026-04-17T23:47:08.566235202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.590917 containerd[1709]: time="2026-04-17T23:47:08.590028111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ql4z8,Uid:8a1318ab-ffd2-447a-b894-d520f5a1dd65,Namespace:calico-system,Attempt:1,} returns sandbox id \"86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef\"" Apr 17 23:47:08.595243 containerd[1709]: time="2026-04-17T23:47:08.595052376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:47:08.602522 systemd[1]: Removed slice kubepods-besteffort-pod43b4c361_5e57_4a81_a9ff_79904f9465e0.slice - libcontainer container kubepods-besteffort-pod43b4c361_5e57_4a81_a9ff_79904f9465e0.slice. Apr 17 23:47:08.620931 systemd[1]: Started cri-containerd-c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee.scope - libcontainer container c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee. Apr 17 23:47:08.642122 containerd[1709]: time="2026-04-17T23:47:08.640900673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d6dc9bbd-6sslf,Uid:5f78a19d-36c0-4d0e-b651-34be58a4bc17,Namespace:calico-system,Attempt:1,} returns sandbox id \"465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e\"" Apr 17 23:47:08.698486 containerd[1709]: time="2026-04-17T23:47:08.698445921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-wnsrc,Uid:d0a502cd-d376-426a-82a8-7e44f6c46407,Namespace:calico-system,Attempt:1,} returns sandbox id \"c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee\"" Apr 17 23:47:08.755108 systemd-networkd[1329]: calia5f84671acc: Link UP Apr 17 23:47:08.757023 systemd-networkd[1329]: calia5f84671acc: Gained carrier Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.226 [ERROR][4541] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open 
/var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.257 [INFO][4541] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0 calico-apiserver-5f7ccb55fc- calico-system 7034716d-ae1e-4a07-9d47-901268c9b69a 954 0 2026-04-17 23:46:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f7ccb55fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a calico-apiserver-5f7ccb55fc-dvppd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia5f84671acc [] [] }} ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.257 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.317 [INFO][4576] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" HandleID="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.324 [INFO][4576] ipam/ipam_plugin.go 301: Auto 
assigning IP ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" HandleID="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"calico-apiserver-5f7ccb55fc-dvppd", "timestamp":"2026-04-17 23:47:08.317179763 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112dc0)} Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.324 [INFO][4576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.489 [INFO][4576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.489 [INFO][4576] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.492 [INFO][4576] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.581 [INFO][4576] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.601 [INFO][4576] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.605 [INFO][4576] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.610 [INFO][4576] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.611 [INFO][4576] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.614 [INFO][4576] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45 Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.649 [INFO][4576] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.742 [INFO][4576] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.132/26] block=192.168.29.128/26 handle="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.742 [INFO][4576] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.132/26] handle="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.742 [INFO][4576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:08.793936 containerd[1709]: 2026-04-17 23:47:08.742 [INFO][4576] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.132/26] IPv6=[] ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" HandleID="k8s-pod-network.686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.796015 containerd[1709]: 2026-04-17 23:47:08.744 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"7034716d-ae1e-4a07-9d47-901268c9b69a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"calico-apiserver-5f7ccb55fc-dvppd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5f84671acc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.796015 containerd[1709]: 2026-04-17 23:47:08.745 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.132/32] ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.796015 containerd[1709]: 2026-04-17 23:47:08.745 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5f84671acc ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.796015 containerd[1709]: 2026-04-17 23:47:08.758 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" 
WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.796015 containerd[1709]: 2026-04-17 23:47:08.759 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"7034716d-ae1e-4a07-9d47-901268c9b69a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45", Pod:"calico-apiserver-5f7ccb55fc-dvppd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5f84671acc", MAC:"ea:33:f9:f3:a2:00", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:08.796015 containerd[1709]: 2026-04-17 23:47:08.790 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45" Namespace="calico-system" Pod="calico-apiserver-5f7ccb55fc-dvppd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:47:08.828860 containerd[1709]: time="2026-04-17T23:47:08.828731815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:08.828860 containerd[1709]: time="2026-04-17T23:47:08.828816317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:08.828860 containerd[1709]: time="2026-04-17T23:47:08.828832317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.829598 containerd[1709]: time="2026-04-17T23:47:08.829544426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:08.855954 systemd[1]: Started cri-containerd-686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45.scope - libcontainer container 686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45. 
Apr 17 23:47:08.912229 containerd[1709]: time="2026-04-17T23:47:08.912183001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7ccb55fc-dvppd,Uid:7034716d-ae1e-4a07-9d47-901268c9b69a,Namespace:calico-system,Attempt:1,} returns sandbox id \"686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45\"" Apr 17 23:47:08.972612 systemd[1]: Created slice kubepods-besteffort-podb0f8d9bc_898b_4380_a34f_a003afaabd59.slice - libcontainer container kubepods-besteffort-podb0f8d9bc_898b_4380_a34f_a003afaabd59.slice. Apr 17 23:47:09.012644 systemd[1]: var-lib-kubelet-pods-43b4c361\x2d5e57\x2d4a81\x2da9ff\x2d79904f9465e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv5q6f.mount: Deactivated successfully. Apr 17 23:47:09.014499 systemd[1]: var-lib-kubelet-pods-43b4c361\x2d5e57\x2d4a81\x2da9ff\x2d79904f9465e0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:47:09.104405 kubelet[3225]: I0417 23:47:09.104360 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b0f8d9bc-898b-4380-a34f-a003afaabd59-nginx-config\") pod \"whisker-85cd49d475-2mdcj\" (UID: \"b0f8d9bc-898b-4380-a34f-a003afaabd59\") " pod="calico-system/whisker-85cd49d475-2mdcj" Apr 17 23:47:09.104405 kubelet[3225]: I0417 23:47:09.104410 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f8d9bc-898b-4380-a34f-a003afaabd59-whisker-ca-bundle\") pod \"whisker-85cd49d475-2mdcj\" (UID: \"b0f8d9bc-898b-4380-a34f-a003afaabd59\") " pod="calico-system/whisker-85cd49d475-2mdcj" Apr 17 23:47:09.104959 kubelet[3225]: I0417 23:47:09.104435 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqw5l\" (UniqueName: 
\"kubernetes.io/projected/b0f8d9bc-898b-4380-a34f-a003afaabd59-kube-api-access-dqw5l\") pod \"whisker-85cd49d475-2mdcj\" (UID: \"b0f8d9bc-898b-4380-a34f-a003afaabd59\") " pod="calico-system/whisker-85cd49d475-2mdcj" Apr 17 23:47:09.104959 kubelet[3225]: I0417 23:47:09.104470 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b0f8d9bc-898b-4380-a34f-a003afaabd59-whisker-backend-key-pair\") pod \"whisker-85cd49d475-2mdcj\" (UID: \"b0f8d9bc-898b-4380-a34f-a003afaabd59\") " pod="calico-system/whisker-85cd49d475-2mdcj" Apr 17 23:47:09.287963 containerd[1709]: time="2026-04-17T23:47:09.287379780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cd49d475-2mdcj,Uid:b0f8d9bc-898b-4380-a34f-a003afaabd59,Namespace:calico-system,Attempt:0,}" Apr 17 23:47:09.334109 kubelet[3225]: I0417 23:47:09.334063 3225 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b4c361-5e57-4a81-a9ff-79904f9465e0" path="/var/lib/kubelet/pods/43b4c361-5e57-4a81-a9ff-79904f9465e0/volumes" Apr 17 23:47:09.516758 kernel: calico-node[4807]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:47:09.533779 systemd-networkd[1329]: cali030718876ac: Link UP Apr 17 23:47:09.535214 systemd-networkd[1329]: cali030718876ac: Gained carrier Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.422 [ERROR][4909] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.438 [INFO][4909] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0 whisker-85cd49d475- calico-system b0f8d9bc-898b-4380-a34f-a003afaabd59 990 0 2026-04-17 23:47:08 +0000 
UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85cd49d475 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a whisker-85cd49d475-2mdcj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali030718876ac [] [] }} ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.438 [INFO][4909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.478 [INFO][4925] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" HandleID="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.485 [INFO][4925] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" HandleID="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002772f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"whisker-85cd49d475-2mdcj", "timestamp":"2026-04-17 23:47:09.478992572 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001dedc0)} Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.485 [INFO][4925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.485 [INFO][4925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.485 [INFO][4925] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.488 [INFO][4925] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.493 [INFO][4925] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.498 [INFO][4925] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.501 [INFO][4925] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.504 [INFO][4925] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.504 [INFO][4925] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.506 
[INFO][4925] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.511 [INFO][4925] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.520 [INFO][4925] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.29.133/26] block=192.168.29.128/26 handle="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.520 [INFO][4925] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.133/26] handle="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.520 [INFO][4925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:47:09.595498 containerd[1709]: 2026-04-17 23:47:09.520 [INFO][4925] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.133/26] IPv6=[] ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" HandleID="k8s-pod-network.90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" Apr 17 23:47:09.599341 containerd[1709]: 2026-04-17 23:47:09.525 [INFO][4909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0", GenerateName:"whisker-85cd49d475-", Namespace:"calico-system", SelfLink:"", UID:"b0f8d9bc-898b-4380-a34f-a003afaabd59", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 47, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85cd49d475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"whisker-85cd49d475-2mdcj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali030718876ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:09.599341 containerd[1709]: 2026-04-17 23:47:09.525 [INFO][4909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.133/32] ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" Apr 17 23:47:09.599341 containerd[1709]: 2026-04-17 23:47:09.525 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali030718876ac ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" Apr 17 23:47:09.599341 containerd[1709]: 2026-04-17 23:47:09.535 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" Apr 17 23:47:09.599341 containerd[1709]: 2026-04-17 23:47:09.536 [INFO][4909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0", GenerateName:"whisker-85cd49d475-", Namespace:"calico-system", SelfLink:"", 
UID:"b0f8d9bc-898b-4380-a34f-a003afaabd59", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 47, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85cd49d475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca", Pod:"whisker-85cd49d475-2mdcj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali030718876ac", MAC:"b6:98:2f:7d:23:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:09.599341 containerd[1709]: 2026-04-17 23:47:09.590 [INFO][4909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca" Namespace="calico-system" Pod="whisker-85cd49d475-2mdcj" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--85cd49d475--2mdcj-eth0" Apr 17 23:47:09.753582 containerd[1709]: time="2026-04-17T23:47:09.752905135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:09.753582 containerd[1709]: time="2026-04-17T23:47:09.752978835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:09.753582 containerd[1709]: time="2026-04-17T23:47:09.753027136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:09.753582 containerd[1709]: time="2026-04-17T23:47:09.753171138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:09.761790 systemd-networkd[1329]: calic0915fbd163: Gained IPv6LL Apr 17 23:47:09.786939 systemd[1]: Started cri-containerd-90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca.scope - libcontainer container 90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca. Apr 17 23:47:09.857465 containerd[1709]: time="2026-04-17T23:47:09.857319292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cd49d475-2mdcj,Uid:b0f8d9bc-898b-4380-a34f-a003afaabd59,Namespace:calico-system,Attempt:0,} returns sandbox id \"90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca\"" Apr 17 23:47:10.144894 systemd-networkd[1329]: cali7b4a9b097b1: Gained IPv6LL Apr 17 23:47:10.273210 systemd-networkd[1329]: calia5f84671acc: Gained IPv6LL Apr 17 23:47:10.464855 systemd-networkd[1329]: cali684756efbe2: Gained IPv6LL Apr 17 23:47:10.516525 systemd-networkd[1329]: vxlan.calico: Link UP Apr 17 23:47:10.516562 systemd-networkd[1329]: vxlan.calico: Gained carrier Apr 17 23:47:11.169003 systemd-networkd[1329]: cali030718876ac: Gained IPv6LL Apr 17 23:47:12.000914 systemd-networkd[1329]: vxlan.calico: Gained IPv6LL Apr 17 23:47:13.305807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78603173.mount: Deactivated successfully. 
Apr 17 23:47:16.381647 containerd[1709]: time="2026-04-17T23:47:16.381585811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:16.427843 containerd[1709]: time="2026-04-17T23:47:16.427761749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:47:16.431947 containerd[1709]: time="2026-04-17T23:47:16.431876806Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:16.492391 containerd[1709]: time="2026-04-17T23:47:16.492293241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:16.493271 containerd[1709]: time="2026-04-17T23:47:16.493227054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 7.897822573s" Apr 17 23:47:16.493533 containerd[1709]: time="2026-04-17T23:47:16.493277855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:47:16.495083 containerd[1709]: time="2026-04-17T23:47:16.494749075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:47:16.546082 containerd[1709]: time="2026-04-17T23:47:16.546033484Z" level=info msg="CreateContainer within sandbox 
\"86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:47:16.892044 containerd[1709]: time="2026-04-17T23:47:16.891996267Z" level=info msg="CreateContainer within sandbox \"86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"22d25d5ce8d4dcecfc0b0fffe99df5492443d45c43d1396f5b9f8c6ccc8861d1\"" Apr 17 23:47:16.893121 containerd[1709]: time="2026-04-17T23:47:16.892747578Z" level=info msg="StartContainer for \"22d25d5ce8d4dcecfc0b0fffe99df5492443d45c43d1396f5b9f8c6ccc8861d1\"" Apr 17 23:47:16.929874 systemd[1]: Started cri-containerd-22d25d5ce8d4dcecfc0b0fffe99df5492443d45c43d1396f5b9f8c6ccc8861d1.scope - libcontainer container 22d25d5ce8d4dcecfc0b0fffe99df5492443d45c43d1396f5b9f8c6ccc8861d1. Apr 17 23:47:16.987072 containerd[1709]: time="2026-04-17T23:47:16.986910679Z" level=info msg="StartContainer for \"22d25d5ce8d4dcecfc0b0fffe99df5492443d45c43d1396f5b9f8c6ccc8861d1\" returns successfully" Apr 17 23:47:17.982661 kubelet[3225]: I0417 23:47:17.982594 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-ql4z8" podStartSLOduration=51.082125137 podStartE2EDuration="58.982571545s" podCreationTimestamp="2026-04-17 23:46:19 +0000 UTC" firstStartedPulling="2026-04-17 23:47:08.594103964 +0000 UTC m=+71.440817171" lastFinishedPulling="2026-04-17 23:47:16.494550272 +0000 UTC m=+79.341263579" observedRunningTime="2026-04-17 23:47:17.982455343 +0000 UTC m=+80.829168650" watchObservedRunningTime="2026-04-17 23:47:17.982571545 +0000 UTC m=+80.829284752" Apr 17 23:47:18.338034 containerd[1709]: time="2026-04-17T23:47:18.336186433Z" level=info msg="StopPodSandbox for \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\"" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.387 [INFO][5182] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.389 [INFO][5182] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" iface="eth0" netns="/var/run/netns/cni-35999f4f-073b-781b-1ca5-4db5ec96fbe9" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.391 [INFO][5182] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" iface="eth0" netns="/var/run/netns/cni-35999f4f-073b-781b-1ca5-4db5ec96fbe9" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.392 [INFO][5182] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" iface="eth0" netns="/var/run/netns/cni-35999f4f-073b-781b-1ca5-4db5ec96fbe9" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.392 [INFO][5182] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.392 [INFO][5182] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.433 [INFO][5189] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.433 [INFO][5189] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.433 [INFO][5189] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.442 [WARNING][5189] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.442 [INFO][5189] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.445 [INFO][5189] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:18.448285 containerd[1709]: 2026-04-17 23:47:18.447 [INFO][5182] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:47:18.448919 containerd[1709]: time="2026-04-17T23:47:18.448880291Z" level=info msg="TearDown network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\" successfully" Apr 17 23:47:18.449046 containerd[1709]: time="2026-04-17T23:47:18.449027194Z" level=info msg="StopPodSandbox for \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\" returns successfully" Apr 17 23:47:18.453485 systemd[1]: run-netns-cni\x2d35999f4f\x2d073b\x2d781b\x2d1ca5\x2d4db5ec96fbe9.mount: Deactivated successfully. 
Apr 17 23:47:18.455496 containerd[1709]: time="2026-04-17T23:47:18.455459382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8dtqd,Uid:a634795c-d355-406a-b830-50fc4384862e,Namespace:kube-system,Attempt:1,}" Apr 17 23:47:18.610621 systemd-networkd[1329]: cali139a73999ba: Link UP Apr 17 23:47:18.616742 systemd-networkd[1329]: cali139a73999ba: Gained carrier Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.535 [INFO][5195] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0 coredns-66bc5c9577- kube-system a634795c-d355-406a-b830-50fc4384862e 1020 0 2026-04-17 23:46:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a coredns-66bc5c9577-8dtqd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali139a73999ba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.536 [INFO][5195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.566 [INFO][5208] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" 
HandleID="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.575 [INFO][5208] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" HandleID="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277440), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"coredns-66bc5c9577-8dtqd", "timestamp":"2026-04-17 23:47:18.566195513 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f8dc0)} Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.575 [INFO][5208] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.575 [INFO][5208] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.576 [INFO][5208] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.578 [INFO][5208] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.582 [INFO][5208] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.587 [INFO][5208] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.589 [INFO][5208] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.591 [INFO][5208] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.591 [INFO][5208] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.592 [INFO][5208] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04 Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.597 [INFO][5208] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.605 [INFO][5208] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.134/26] block=192.168.29.128/26 handle="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.606 [INFO][5208] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.134/26] handle="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.606 [INFO][5208] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:18.669322 containerd[1709]: 2026-04-17 23:47:18.606 [INFO][5208] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.134/26] IPv6=[] ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" HandleID="k8s-pod-network.b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.670588 containerd[1709]: 2026-04-17 23:47:18.607 [INFO][5195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a634795c-d355-406a-b830-50fc4384862e", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"coredns-66bc5c9577-8dtqd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali139a73999ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:18.670588 containerd[1709]: 2026-04-17 23:47:18.608 [INFO][5195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.134/32] ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.670588 containerd[1709]: 2026-04-17 23:47:18.608 [INFO][5195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali139a73999ba 
ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.670588 containerd[1709]: 2026-04-17 23:47:18.610 [INFO][5195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.670588 containerd[1709]: 2026-04-17 23:47:18.611 [INFO][5195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a634795c-d355-406a-b830-50fc4384862e", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04", 
Pod:"coredns-66bc5c9577-8dtqd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali139a73999ba", MAC:"f2:70:3d:21:68:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:18.671009 containerd[1709]: 2026-04-17 23:47:18.633 [INFO][5195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04" Namespace="kube-system" Pod="coredns-66bc5c9577-8dtqd" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:47:18.722836 containerd[1709]: time="2026-04-17T23:47:18.722354672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:18.722836 containerd[1709]: time="2026-04-17T23:47:18.722701277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:18.722836 containerd[1709]: time="2026-04-17T23:47:18.722829379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:18.723128 containerd[1709]: time="2026-04-17T23:47:18.722934480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:18.763683 systemd[1]: Started cri-containerd-b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04.scope - libcontainer container b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04. Apr 17 23:47:18.825384 containerd[1709]: time="2026-04-17T23:47:18.825333796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8dtqd,Uid:a634795c-d355-406a-b830-50fc4384862e,Namespace:kube-system,Attempt:1,} returns sandbox id \"b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04\"" Apr 17 23:47:19.937729 systemd-networkd[1329]: cali139a73999ba: Gained IPv6LL Apr 17 23:47:20.392737 containerd[1709]: time="2026-04-17T23:47:20.391565150Z" level=info msg="CreateContainer within sandbox \"b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:47:20.467052 containerd[1709]: time="2026-04-17T23:47:20.466998293Z" level=info msg="CreateContainer within sandbox \"b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2347f3e6e071fb2d8716fad21b4a00c354f0fad30acf121c009760baa35fec48\"" Apr 17 23:47:20.468670 containerd[1709]: time="2026-04-17T23:47:20.468148808Z" level=info msg="StartContainer for \"2347f3e6e071fb2d8716fad21b4a00c354f0fad30acf121c009760baa35fec48\"" Apr 17 23:47:20.550997 systemd[1]: Started cri-containerd-2347f3e6e071fb2d8716fad21b4a00c354f0fad30acf121c009760baa35fec48.scope - 
libcontainer container 2347f3e6e071fb2d8716fad21b4a00c354f0fad30acf121c009760baa35fec48. Apr 17 23:47:20.599761 containerd[1709]: time="2026-04-17T23:47:20.598808515Z" level=info msg="StartContainer for \"2347f3e6e071fb2d8716fad21b4a00c354f0fad30acf121c009760baa35fec48\" returns successfully" Apr 17 23:47:20.702783 kubelet[3225]: I0417 23:47:20.702484 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8dtqd" podStartSLOduration=76.702463248 podStartE2EDuration="1m16.702463248s" podCreationTimestamp="2026-04-17 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:47:20.677461202 +0000 UTC m=+83.524174409" watchObservedRunningTime="2026-04-17 23:47:20.702463248 +0000 UTC m=+83.549176555" Apr 17 23:47:21.331559 containerd[1709]: time="2026-04-17T23:47:21.331201609Z" level=info msg="StopPodSandbox for \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\"" Apr 17 23:47:21.331559 containerd[1709]: time="2026-04-17T23:47:21.331253210Z" level=info msg="StopPodSandbox for \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\"" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.454 [INFO][5406] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.454 [INFO][5406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" iface="eth0" netns="/var/run/netns/cni-d0f69ed0-d539-a0c3-bab8-195783027366" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.455 [INFO][5406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" iface="eth0" netns="/var/run/netns/cni-d0f69ed0-d539-a0c3-bab8-195783027366" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.455 [INFO][5406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" iface="eth0" netns="/var/run/netns/cni-d0f69ed0-d539-a0c3-bab8-195783027366" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.455 [INFO][5406] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.455 [INFO][5406] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.496 [INFO][5418] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.496 [INFO][5418] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.496 [INFO][5418] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.506 [WARNING][5418] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.506 [INFO][5418] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.511 [INFO][5418] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:21.518423 containerd[1709]: 2026-04-17 23:47:21.514 [INFO][5406] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:21.523113 containerd[1709]: time="2026-04-17T23:47:21.521508526Z" level=info msg="TearDown network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\" successfully" Apr 17 23:47:21.523113 containerd[1709]: time="2026-04-17T23:47:21.521547127Z" level=info msg="StopPodSandbox for \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\" returns successfully" Apr 17 23:47:21.524501 systemd[1]: run-netns-cni\x2dd0f69ed0\x2dd539\x2da0c3\x2dbab8\x2d195783027366.mount: Deactivated successfully. 
Apr 17 23:47:21.529546 containerd[1709]: time="2026-04-17T23:47:21.529509836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtqds,Uid:d55082e2-e0fa-4118-b796-695fc5437662,Namespace:calico-system,Attempt:1,}" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.448 [INFO][5399] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.448 [INFO][5399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" iface="eth0" netns="/var/run/netns/cni-973175af-acdd-f2f4-0b97-dbe757180b90" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.449 [INFO][5399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" iface="eth0" netns="/var/run/netns/cni-973175af-acdd-f2f4-0b97-dbe757180b90" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.452 [INFO][5399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" iface="eth0" netns="/var/run/netns/cni-973175af-acdd-f2f4-0b97-dbe757180b90" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.452 [INFO][5399] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.452 [INFO][5399] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.535 [INFO][5416] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.535 [INFO][5416] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.535 [INFO][5416] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.549 [WARNING][5416] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.555 [INFO][5416] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.558 [INFO][5416] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:21.562777 containerd[1709]: 2026-04-17 23:47:21.560 [INFO][5399] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:47:21.565613 containerd[1709]: time="2026-04-17T23:47:21.563169599Z" level=info msg="TearDown network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\" successfully" Apr 17 23:47:21.565613 containerd[1709]: time="2026-04-17T23:47:21.563214800Z" level=info msg="StopPodSandbox for \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\" returns successfully" Apr 17 23:47:21.568656 systemd[1]: run-netns-cni\x2d973175af\x2dacdd\x2df2f4\x2d0b97\x2ddbe757180b90.mount: Deactivated successfully. 
Apr 17 23:47:21.574078 containerd[1709]: time="2026-04-17T23:47:21.573895247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6rrjw,Uid:099fd00a-16a5-4661-ac32-2536e3c7653c,Namespace:kube-system,Attempt:1,}" Apr 17 23:47:21.804330 systemd-networkd[1329]: calia86cfa6defa: Link UP Apr 17 23:47:21.805988 systemd-networkd[1329]: calia86cfa6defa: Gained carrier Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.642 [INFO][5430] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0 csi-node-driver- calico-system d55082e2-e0fa-4118-b796-695fc5437662 1047 0 2026-04-17 23:46:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a csi-node-driver-vtqds eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia86cfa6defa [] [] }} ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.643 [INFO][5430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.708 [INFO][5451] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" 
HandleID="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.718 [INFO][5451] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" HandleID="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"csi-node-driver-vtqds", "timestamp":"2026-04-17 23:47:21.708757002 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e8f20)} Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.718 [INFO][5451] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.719 [INFO][5451] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.719 [INFO][5451] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.723 [INFO][5451] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.730 [INFO][5451] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.740 [INFO][5451] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.743 [INFO][5451] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.746 [INFO][5451] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.746 [INFO][5451] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.749 [INFO][5451] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9 Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.759 [INFO][5451] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.781 [INFO][5451] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.135/26] block=192.168.29.128/26 handle="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.782 [INFO][5451] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.135/26] handle="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.782 [INFO][5451] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:21.846895 containerd[1709]: 2026-04-17 23:47:21.783 [INFO][5451] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.135/26] IPv6=[] ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" HandleID="k8s-pod-network.090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.847974 containerd[1709]: 2026-04-17 23:47:21.790 [INFO][5430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d55082e2-e0fa-4118-b796-695fc5437662", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"csi-node-driver-vtqds", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia86cfa6defa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:21.847974 containerd[1709]: 2026-04-17 23:47:21.790 [INFO][5430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.135/32] ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.847974 containerd[1709]: 2026-04-17 23:47:21.790 [INFO][5430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia86cfa6defa ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.847974 containerd[1709]: 2026-04-17 23:47:21.807 [INFO][5430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.847974 
containerd[1709]: 2026-04-17 23:47:21.810 [INFO][5430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d55082e2-e0fa-4118-b796-695fc5437662", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9", Pod:"csi-node-driver-vtqds", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia86cfa6defa", MAC:"4a:a1:a1:76:15:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:21.847974 containerd[1709]: 
2026-04-17 23:47:21.837 [INFO][5430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9" Namespace="calico-system" Pod="csi-node-driver-vtqds" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:21.919670 containerd[1709]: time="2026-04-17T23:47:21.919225796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:21.919670 containerd[1709]: time="2026-04-17T23:47:21.919294197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:21.919670 containerd[1709]: time="2026-04-17T23:47:21.919332198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:21.919670 containerd[1709]: time="2026-04-17T23:47:21.919537401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:21.944627 systemd-networkd[1329]: cali154c18b3ca7: Link UP Apr 17 23:47:21.950611 systemd-networkd[1329]: cali154c18b3ca7: Gained carrier Apr 17 23:47:21.951427 systemd[1]: Started cri-containerd-090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9.scope - libcontainer container 090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9. 
Apr 17 23:47:21.956252 containerd[1709]: time="2026-04-17T23:47:21.955182791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:21.960440 containerd[1709]: time="2026-04-17T23:47:21.960387663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:47:21.963013 containerd[1709]: time="2026-04-17T23:47:21.962904197Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.707 [INFO][5440] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0 coredns-66bc5c9577- kube-system 099fd00a-16a5-4661-ac32-2536e3c7653c 1046 0 2026-04-17 23:46:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-7251cc3c8a coredns-66bc5c9577-6rrjw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali154c18b3ca7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.707 [INFO][5440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" 
WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.830 [INFO][5462] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" HandleID="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.855 [INFO][5462] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" HandleID="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-7251cc3c8a", "pod":"coredns-66bc5c9577-6rrjw", "timestamp":"2026-04-17 23:47:21.830237973 +0000 UTC"}, Hostname:"ci-4081.3.6-n-7251cc3c8a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001889a0)} Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.855 [INFO][5462] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.855 [INFO][5462] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.855 [INFO][5462] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-7251cc3c8a' Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.861 [INFO][5462] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.871 [INFO][5462] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.882 [INFO][5462] ipam/ipam.go 526: Trying affinity for 192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.887 [INFO][5462] ipam/ipam.go 160: Attempting to load block cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.897 [INFO][5462] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.29.128/26 host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.898 [INFO][5462] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.29.128/26 handle="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.901 [INFO][5462] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.909 [INFO][5462] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.29.128/26 handle="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.934 [INFO][5462] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.29.136/26] block=192.168.29.128/26 handle="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.935 [INFO][5462] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.29.136/26] handle="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" host="ci-4081.3.6-n-7251cc3c8a" Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.935 [INFO][5462] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:21.982038 containerd[1709]: 2026-04-17 23:47:21.935 [INFO][5462] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.29.136/26] IPv6=[] ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" HandleID="k8s-pod-network.8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.982969 containerd[1709]: 2026-04-17 23:47:21.941 [INFO][5440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"099fd00a-16a5-4661-ac32-2536e3c7653c", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"", Pod:"coredns-66bc5c9577-6rrjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali154c18b3ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:21.982969 containerd[1709]: 2026-04-17 23:47:21.941 [INFO][5440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.136/32] ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.982969 containerd[1709]: 2026-04-17 23:47:21.941 [INFO][5440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali154c18b3ca7 
ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.982969 containerd[1709]: 2026-04-17 23:47:21.945 [INFO][5440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.982969 containerd[1709]: 2026-04-17 23:47:21.946 [INFO][5440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"099fd00a-16a5-4661-ac32-2536e3c7653c", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a", 
Pod:"coredns-66bc5c9577-6rrjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali154c18b3ca7", MAC:"3e:1d:bc:39:4c:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:21.983304 containerd[1709]: 2026-04-17 23:47:21.974 [INFO][5440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a" Namespace="kube-system" Pod="coredns-66bc5c9577-6rrjw" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:47:21.989801 containerd[1709]: time="2026-04-17T23:47:21.989761267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:21.996317 containerd[1709]: time="2026-04-17T23:47:21.995148941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.500362065s" Apr 17 23:47:21.996317 containerd[1709]: time="2026-04-17T23:47:21.995197241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:47:21.998592 containerd[1709]: time="2026-04-17T23:47:21.998551487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:47:22.021816 containerd[1709]: time="2026-04-17T23:47:22.021768907Z" level=info msg="CreateContainer within sandbox \"465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:47:22.033408 containerd[1709]: time="2026-04-17T23:47:22.033365566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vtqds,Uid:d55082e2-e0fa-4118-b796-695fc5437662,Namespace:calico-system,Attempt:1,} returns sandbox id \"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9\"" Apr 17 23:47:22.056230 containerd[1709]: time="2026-04-17T23:47:22.055979677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:47:22.056638 containerd[1709]: time="2026-04-17T23:47:22.056596186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:47:22.056854 containerd[1709]: time="2026-04-17T23:47:22.056790588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:22.057435 containerd[1709]: time="2026-04-17T23:47:22.057272595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:47:22.094928 systemd[1]: Started cri-containerd-8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a.scope - libcontainer container 8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a. Apr 17 23:47:22.164493 containerd[1709]: time="2026-04-17T23:47:22.164454069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6rrjw,Uid:099fd00a-16a5-4661-ac32-2536e3c7653c,Namespace:kube-system,Attempt:1,} returns sandbox id \"8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a\"" Apr 17 23:47:22.243089 containerd[1709]: time="2026-04-17T23:47:22.243037050Z" level=info msg="CreateContainer within sandbox \"8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:47:22.244485 containerd[1709]: time="2026-04-17T23:47:22.244440369Z" level=info msg="CreateContainer within sandbox \"465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3853e4f0d7bef33d9879b56f48452f98c2fae055851cb2a7595c4fdce5221533\"" Apr 17 23:47:22.248567 containerd[1709]: time="2026-04-17T23:47:22.247747815Z" level=info msg="StartContainer for \"3853e4f0d7bef33d9879b56f48452f98c2fae055851cb2a7595c4fdce5221533\"" Apr 17 23:47:22.274891 systemd[1]: Started cri-containerd-3853e4f0d7bef33d9879b56f48452f98c2fae055851cb2a7595c4fdce5221533.scope - libcontainer container 3853e4f0d7bef33d9879b56f48452f98c2fae055851cb2a7595c4fdce5221533. 
Apr 17 23:47:22.387815 containerd[1709]: time="2026-04-17T23:47:22.386553524Z" level=info msg="StartContainer for \"3853e4f0d7bef33d9879b56f48452f98c2fae055851cb2a7595c4fdce5221533\" returns successfully" Apr 17 23:47:22.691409 kubelet[3225]: I0417 23:47:22.690490 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9d6dc9bbd-6sslf" podStartSLOduration=50.344357739 podStartE2EDuration="1m3.690445203s" podCreationTimestamp="2026-04-17 23:46:19 +0000 UTC" firstStartedPulling="2026-04-17 23:47:08.652176419 +0000 UTC m=+71.498889626" lastFinishedPulling="2026-04-17 23:47:21.998263783 +0000 UTC m=+84.844977090" observedRunningTime="2026-04-17 23:47:22.689879396 +0000 UTC m=+85.536592703" watchObservedRunningTime="2026-04-17 23:47:22.690445203 +0000 UTC m=+85.537158510" Apr 17 23:47:22.704051 containerd[1709]: time="2026-04-17T23:47:22.703954189Z" level=info msg="CreateContainer within sandbox \"8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1a24ce75ccf864e86142b75a2ac94dc66d20af561855f629f5c591e764b50e5\"" Apr 17 23:47:22.712731 containerd[1709]: time="2026-04-17T23:47:22.708979758Z" level=info msg="StartContainer for \"b1a24ce75ccf864e86142b75a2ac94dc66d20af561855f629f5c591e764b50e5\"" Apr 17 23:47:22.783033 systemd[1]: Started cri-containerd-b1a24ce75ccf864e86142b75a2ac94dc66d20af561855f629f5c591e764b50e5.scope - libcontainer container b1a24ce75ccf864e86142b75a2ac94dc66d20af561855f629f5c591e764b50e5. 
Apr 17 23:47:22.881910 systemd-networkd[1329]: calia86cfa6defa: Gained IPv6LL Apr 17 23:47:22.896803 containerd[1709]: time="2026-04-17T23:47:22.896335035Z" level=info msg="StartContainer for \"b1a24ce75ccf864e86142b75a2ac94dc66d20af561855f629f5c591e764b50e5\" returns successfully" Apr 17 23:47:23.649140 systemd-networkd[1329]: cali154c18b3ca7: Gained IPv6LL Apr 17 23:47:23.689887 kubelet[3225]: I0417 23:47:23.689817 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6rrjw" podStartSLOduration=79.689781048 podStartE2EDuration="1m19.689781048s" podCreationTimestamp="2026-04-17 23:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:47:23.687444016 +0000 UTC m=+86.534157323" watchObservedRunningTime="2026-04-17 23:47:23.689781048 +0000 UTC m=+86.536494255" Apr 17 23:47:25.671827 containerd[1709]: time="2026-04-17T23:47:25.671698406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:25.674699 containerd[1709]: time="2026-04-17T23:47:25.674647546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:47:25.678735 containerd[1709]: time="2026-04-17T23:47:25.678593101Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:25.683331 containerd[1709]: time="2026-04-17T23:47:25.683269365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:25.684658 containerd[1709]: time="2026-04-17T23:47:25.684017375Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.684039768s" Apr 17 23:47:25.684658 containerd[1709]: time="2026-04-17T23:47:25.684058176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:47:25.686574 containerd[1709]: time="2026-04-17T23:47:25.685138991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:47:25.692584 containerd[1709]: time="2026-04-17T23:47:25.692554993Z" level=info msg="CreateContainer within sandbox \"c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:47:25.722791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437521715.mount: Deactivated successfully. Apr 17 23:47:25.729613 containerd[1709]: time="2026-04-17T23:47:25.729576002Z" level=info msg="CreateContainer within sandbox \"c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"857980d6694c10fbe8d7361273b44e6b7fc1d370ee043cbb36666975579cb98b\"" Apr 17 23:47:25.730312 containerd[1709]: time="2026-04-17T23:47:25.730205011Z" level=info msg="StartContainer for \"857980d6694c10fbe8d7361273b44e6b7fc1d370ee043cbb36666975579cb98b\"" Apr 17 23:47:25.774874 systemd[1]: Started cri-containerd-857980d6694c10fbe8d7361273b44e6b7fc1d370ee043cbb36666975579cb98b.scope - libcontainer container 857980d6694c10fbe8d7361273b44e6b7fc1d370ee043cbb36666975579cb98b. 
Apr 17 23:47:25.829080 containerd[1709]: time="2026-04-17T23:47:25.828976869Z" level=info msg="StartContainer for \"857980d6694c10fbe8d7361273b44e6b7fc1d370ee043cbb36666975579cb98b\" returns successfully" Apr 17 23:47:25.999248 containerd[1709]: time="2026-04-17T23:47:25.999191710Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:26.002507 containerd[1709]: time="2026-04-17T23:47:26.002453255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:47:26.004958 containerd[1709]: time="2026-04-17T23:47:26.004917789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 319.741498ms" Apr 17 23:47:26.005057 containerd[1709]: time="2026-04-17T23:47:26.004969089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:47:26.009399 containerd[1709]: time="2026-04-17T23:47:26.007322322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:47:26.015922 containerd[1709]: time="2026-04-17T23:47:26.015895240Z" level=info msg="CreateContainer within sandbox \"686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:47:26.058325 containerd[1709]: time="2026-04-17T23:47:26.058278423Z" level=info msg="CreateContainer within sandbox \"686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"a355a068e97a9ecbe539dca1bd11289fa0a80a7911bf981892c225827fe19b8b\"" Apr 17 23:47:26.059487 containerd[1709]: time="2026-04-17T23:47:26.059449539Z" level=info msg="StartContainer for \"a355a068e97a9ecbe539dca1bd11289fa0a80a7911bf981892c225827fe19b8b\"" Apr 17 23:47:26.095891 systemd[1]: Started cri-containerd-a355a068e97a9ecbe539dca1bd11289fa0a80a7911bf981892c225827fe19b8b.scope - libcontainer container a355a068e97a9ecbe539dca1bd11289fa0a80a7911bf981892c225827fe19b8b. Apr 17 23:47:26.153359 containerd[1709]: time="2026-04-17T23:47:26.153310430Z" level=info msg="StartContainer for \"a355a068e97a9ecbe539dca1bd11289fa0a80a7911bf981892c225827fe19b8b\" returns successfully" Apr 17 23:47:26.738026 kubelet[3225]: I0417 23:47:26.737955 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5f7ccb55fc-dvppd" podStartSLOduration=51.646307096 podStartE2EDuration="1m8.73793417s" podCreationTimestamp="2026-04-17 23:46:18 +0000 UTC" firstStartedPulling="2026-04-17 23:47:08.91441623 +0000 UTC m=+71.761129437" lastFinishedPulling="2026-04-17 23:47:26.006043304 +0000 UTC m=+88.852756511" observedRunningTime="2026-04-17 23:47:26.703450296 +0000 UTC m=+89.550163503" watchObservedRunningTime="2026-04-17 23:47:26.73793417 +0000 UTC m=+89.584647377" Apr 17 23:47:27.064184 kubelet[3225]: I0417 23:47:27.063042 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5f7ccb55fc-wnsrc" podStartSLOduration=52.078604101 podStartE2EDuration="1m9.063018841s" podCreationTimestamp="2026-04-17 23:46:18 +0000 UTC" firstStartedPulling="2026-04-17 23:47:08.700548348 +0000 UTC m=+71.547261555" lastFinishedPulling="2026-04-17 23:47:25.684963088 +0000 UTC m=+88.531676295" observedRunningTime="2026-04-17 23:47:26.737865369 +0000 UTC m=+89.584578576" watchObservedRunningTime="2026-04-17 23:47:27.063018841 +0000 UTC m=+89.909732048" Apr 17 23:47:27.420630 containerd[1709]: 
time="2026-04-17T23:47:27.420185754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:27.423970 containerd[1709]: time="2026-04-17T23:47:27.423287896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:47:27.426674 containerd[1709]: time="2026-04-17T23:47:27.426635942Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:27.431734 containerd[1709]: time="2026-04-17T23:47:27.431565610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:27.432324 containerd[1709]: time="2026-04-17T23:47:27.432293520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.424936098s" Apr 17 23:47:27.432533 containerd[1709]: time="2026-04-17T23:47:27.432429822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:47:27.435107 containerd[1709]: time="2026-04-17T23:47:27.434922856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:47:27.443150 containerd[1709]: time="2026-04-17T23:47:27.443002367Z" level=info msg="CreateContainer within sandbox \"90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca\" for container 
&ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:47:27.483941 containerd[1709]: time="2026-04-17T23:47:27.483895630Z" level=info msg="CreateContainer within sandbox \"90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c6dedeab3ad3509c7b2197d1e98b22451f674e611bec4b3da9ae960a80e9dfdf\"" Apr 17 23:47:27.484598 containerd[1709]: time="2026-04-17T23:47:27.484559439Z" level=info msg="StartContainer for \"c6dedeab3ad3509c7b2197d1e98b22451f674e611bec4b3da9ae960a80e9dfdf\"" Apr 17 23:47:27.533881 systemd[1]: Started cri-containerd-c6dedeab3ad3509c7b2197d1e98b22451f674e611bec4b3da9ae960a80e9dfdf.scope - libcontainer container c6dedeab3ad3509c7b2197d1e98b22451f674e611bec4b3da9ae960a80e9dfdf. Apr 17 23:47:27.593387 containerd[1709]: time="2026-04-17T23:47:27.593227133Z" level=info msg="StartContainer for \"c6dedeab3ad3509c7b2197d1e98b22451f674e611bec4b3da9ae960a80e9dfdf\" returns successfully" Apr 17 23:47:29.935332 containerd[1709]: time="2026-04-17T23:47:29.935271191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:31.626985 containerd[1709]: time="2026-04-17T23:47:31.626901133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:47:32.093845 containerd[1709]: time="2026-04-17T23:47:32.093760575Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:32.135262 containerd[1709]: time="2026-04-17T23:47:32.135200546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:32.137501 containerd[1709]: 
time="2026-04-17T23:47:32.137031572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 4.702074514s" Apr 17 23:47:32.137501 containerd[1709]: time="2026-04-17T23:47:32.137076672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:47:32.141915 containerd[1709]: time="2026-04-17T23:47:32.139828110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:47:32.190522 containerd[1709]: time="2026-04-17T23:47:32.190462109Z" level=info msg="CreateContainer within sandbox \"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:47:32.401153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993315491.mount: Deactivated successfully. Apr 17 23:47:32.535320 containerd[1709]: time="2026-04-17T23:47:32.535265067Z" level=info msg="CreateContainer within sandbox \"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1edab8f7a682a2d9678846aaff621c59c47de0f4f6de3dff65f2f2f0e999c6a2\"" Apr 17 23:47:32.536240 containerd[1709]: time="2026-04-17T23:47:32.536187379Z" level=info msg="StartContainer for \"1edab8f7a682a2d9678846aaff621c59c47de0f4f6de3dff65f2f2f0e999c6a2\"" Apr 17 23:47:32.615897 systemd[1]: Started cri-containerd-1edab8f7a682a2d9678846aaff621c59c47de0f4f6de3dff65f2f2f0e999c6a2.scope - libcontainer container 1edab8f7a682a2d9678846aaff621c59c47de0f4f6de3dff65f2f2f0e999c6a2. 
Apr 17 23:47:32.646186 containerd[1709]: time="2026-04-17T23:47:32.646149297Z" level=info msg="StartContainer for \"1edab8f7a682a2d9678846aaff621c59c47de0f4f6de3dff65f2f2f0e999c6a2\" returns successfully" Apr 17 23:47:35.365570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543185877.mount: Deactivated successfully. Apr 17 23:47:35.886166 containerd[1709]: time="2026-04-17T23:47:35.886088703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:35.934050 containerd[1709]: time="2026-04-17T23:47:35.933748860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:47:35.982371 containerd[1709]: time="2026-04-17T23:47:35.982263630Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:36.028989 containerd[1709]: time="2026-04-17T23:47:36.028893373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:36.030137 containerd[1709]: time="2026-04-17T23:47:36.029791485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.889923674s" Apr 17 23:47:36.030137 containerd[1709]: time="2026-04-17T23:47:36.029839186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:47:36.032765 containerd[1709]: time="2026-04-17T23:47:36.031646311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:47:36.093655 containerd[1709]: time="2026-04-17T23:47:36.093435564Z" level=info msg="CreateContainer within sandbox \"90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:47:36.442087 containerd[1709]: time="2026-04-17T23:47:36.442042274Z" level=info msg="CreateContainer within sandbox \"90ce2406bfff9af824abdef3313c7e109a5c61833ea3d0c2c0ee021fc79ab4ca\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"77b3e999bcae135860a1f2d7213f4c403f13ec9c573183c2c046858bb326763c\"" Apr 17 23:47:36.443127 containerd[1709]: time="2026-04-17T23:47:36.443053388Z" level=info msg="StartContainer for \"77b3e999bcae135860a1f2d7213f4c403f13ec9c573183c2c046858bb326763c\"" Apr 17 23:47:36.486858 systemd[1]: Started cri-containerd-77b3e999bcae135860a1f2d7213f4c403f13ec9c573183c2c046858bb326763c.scope - libcontainer container 77b3e999bcae135860a1f2d7213f4c403f13ec9c573183c2c046858bb326763c. 
Apr 17 23:47:36.537793 containerd[1709]: time="2026-04-17T23:47:36.537744794Z" level=info msg="StartContainer for \"77b3e999bcae135860a1f2d7213f4c403f13ec9c573183c2c046858bb326763c\" returns successfully" Apr 17 23:47:38.680530 kubelet[3225]: I0417 23:47:38.680460 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-85cd49d475-2mdcj" podStartSLOduration=4.509588023 podStartE2EDuration="30.680440096s" podCreationTimestamp="2026-04-17 23:47:08 +0000 UTC" firstStartedPulling="2026-04-17 23:47:09.860352532 +0000 UTC m=+72.707065739" lastFinishedPulling="2026-04-17 23:47:36.031204605 +0000 UTC m=+98.877917812" observedRunningTime="2026-04-17 23:47:36.734481509 +0000 UTC m=+99.581194816" watchObservedRunningTime="2026-04-17 23:47:38.680440096 +0000 UTC m=+101.527153403" Apr 17 23:47:39.589996 containerd[1709]: time="2026-04-17T23:47:39.589930210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:39.634897 containerd[1709]: time="2026-04-17T23:47:39.634720121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:47:39.638184 containerd[1709]: time="2026-04-17T23:47:39.638115567Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:39.728815 containerd[1709]: time="2026-04-17T23:47:39.728728704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:47:39.730399 containerd[1709]: time="2026-04-17T23:47:39.729601316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id 
\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.697915205s" Apr 17 23:47:39.730399 containerd[1709]: time="2026-04-17T23:47:39.729645217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:47:39.736939 containerd[1709]: time="2026-04-17T23:47:39.736907016Z" level=info msg="CreateContainer within sandbox \"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:47:40.034121 containerd[1709]: time="2026-04-17T23:47:40.034070272Z" level=info msg="CreateContainer within sandbox \"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4540022531b592c71ec4ec597dc5a3d1c48e3aa99a8559d49564251caf963c85\"" Apr 17 23:47:40.035011 containerd[1709]: time="2026-04-17T23:47:40.034891583Z" level=info msg="StartContainer for \"4540022531b592c71ec4ec597dc5a3d1c48e3aa99a8559d49564251caf963c85\"" Apr 17 23:47:40.093081 systemd[1]: Started cri-containerd-4540022531b592c71ec4ec597dc5a3d1c48e3aa99a8559d49564251caf963c85.scope - libcontainer container 4540022531b592c71ec4ec597dc5a3d1c48e3aa99a8559d49564251caf963c85. 
Apr 17 23:47:40.156202 containerd[1709]: time="2026-04-17T23:47:40.156148638Z" level=info msg="StartContainer for \"4540022531b592c71ec4ec597dc5a3d1c48e3aa99a8559d49564251caf963c85\" returns successfully" Apr 17 23:47:40.459447 kubelet[3225]: I0417 23:47:40.458965 3225 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:47:40.459447 kubelet[3225]: I0417 23:47:40.459011 3225 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:47:40.754132 kubelet[3225]: I0417 23:47:40.753491 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vtqds" podStartSLOduration=64.064528541 podStartE2EDuration="1m21.753471591s" podCreationTimestamp="2026-04-17 23:46:19 +0000 UTC" firstStartedPulling="2026-04-17 23:47:22.041595679 +0000 UTC m=+84.888308886" lastFinishedPulling="2026-04-17 23:47:39.730538729 +0000 UTC m=+102.577251936" observedRunningTime="2026-04-17 23:47:40.752544078 +0000 UTC m=+103.599257285" watchObservedRunningTime="2026-04-17 23:47:40.753471591 +0000 UTC m=+103.600184898" Apr 17 23:47:52.691679 systemd[1]: run-containerd-runc-k8s.io-3853e4f0d7bef33d9879b56f48452f98c2fae055851cb2a7595c4fdce5221533-runc.1t8cfp.mount: Deactivated successfully. Apr 17 23:47:57.346309 containerd[1709]: time="2026-04-17T23:47:57.346095433Z" level=info msg="StopPodSandbox for \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\"" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.381 [WARNING][6097] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d55082e2-e0fa-4118-b796-695fc5437662", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9", Pod:"csi-node-driver-vtqds", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia86cfa6defa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.381 [INFO][6097] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.381 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" iface="eth0" netns="" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.381 [INFO][6097] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.381 [INFO][6097] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.406 [INFO][6105] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.406 [INFO][6105] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.406 [INFO][6105] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.412 [WARNING][6105] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.412 [INFO][6105] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.413 [INFO][6105] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:59.239849 containerd[1709]: 2026-04-17 23:47:57.414 [INFO][6097] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.239849 containerd[1709]: time="2026-04-17T23:47:57.415788673Z" level=info msg="TearDown network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\" successfully" Apr 17 23:47:59.239849 containerd[1709]: time="2026-04-17T23:47:57.415853374Z" level=info msg="StopPodSandbox for \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\" returns successfully" Apr 17 23:47:59.239849 containerd[1709]: time="2026-04-17T23:47:57.416579284Z" level=info msg="RemovePodSandbox for \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\"" Apr 17 23:47:59.239849 containerd[1709]: time="2026-04-17T23:47:57.416611184Z" level=info msg="Forcibly stopping sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\"" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.458 [WARNING][6119] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d55082e2-e0fa-4118-b796-695fc5437662", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"090c1ddd8631fa9a94d854bb5725bca35a7a6a38b426e5a6a20fcb7bb77d87b9", Pod:"csi-node-driver-vtqds", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia86cfa6defa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.459 [INFO][6119] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.459 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" iface="eth0" netns="" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.459 [INFO][6119] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.459 [INFO][6119] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.482 [INFO][6127] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.482 [INFO][6127] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.482 [INFO][6127] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.488 [WARNING][6127] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.488 [INFO][6127] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" HandleID="k8s-pod-network.85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-csi--node--driver--vtqds-eth0" Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.489 [INFO][6127] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:47:59.242075 containerd[1709]: 2026-04-17 23:47:57.490 [INFO][6119] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476" Apr 17 23:47:59.242075 containerd[1709]: time="2026-04-17T23:47:57.491832599Z" level=info msg="TearDown network for sandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\" successfully" Apr 17 23:47:59.982701 containerd[1709]: time="2026-04-17T23:47:59.982647514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:47:59.982909 containerd[1709]: time="2026-04-17T23:47:59.982770715Z" level=info msg="RemovePodSandbox \"85ab84e7b12aa22c683b2ee8c51a702c72f8dd7d1f116bbd41a6ddd18786b476\" returns successfully" Apr 17 23:47:59.983541 containerd[1709]: time="2026-04-17T23:47:59.983500625Z" level=info msg="StopPodSandbox for \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\"" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.019 [WARNING][6141] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"099fd00a-16a5-4661-ac32-2536e3c7653c", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a", Pod:"coredns-66bc5c9577-6rrjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali154c18b3ca7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.019 [INFO][6141] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.019 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" iface="eth0" netns="" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.019 [INFO][6141] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.019 [INFO][6141] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.042 [INFO][6148] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.042 [INFO][6148] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.042 [INFO][6148] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.048 [WARNING][6148] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.048 [INFO][6148] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.050 [INFO][6148] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:00.052633 containerd[1709]: 2026-04-17 23:48:00.051 [INFO][6141] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.052633 containerd[1709]: time="2026-04-17T23:48:00.052479256Z" level=info msg="TearDown network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\" successfully" Apr 17 23:48:00.052633 containerd[1709]: time="2026-04-17T23:48:00.052513456Z" level=info msg="StopPodSandbox for \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\" returns successfully" Apr 17 23:48:00.053534 containerd[1709]: time="2026-04-17T23:48:00.053504570Z" level=info msg="RemovePodSandbox for \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\"" Apr 17 23:48:00.053621 containerd[1709]: time="2026-04-17T23:48:00.053542670Z" level=info msg="Forcibly stopping sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\"" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.097 [WARNING][6163] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"099fd00a-16a5-4661-ac32-2536e3c7653c", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"8ae3c57cffd4b527b9615e4f231d8c3434c90dd6633fcf711324d7cf97c6685a", Pod:"coredns-66bc5c9577-6rrjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali154c18b3ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.097 [INFO][6163] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.097 [INFO][6163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" iface="eth0" netns="" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.098 [INFO][6163] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.098 [INFO][6163] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.121 [INFO][6171] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.122 [INFO][6171] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.122 [INFO][6171] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.143 [WARNING][6171] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.143 [INFO][6171] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" HandleID="k8s-pod-network.b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--6rrjw-eth0" Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.147 [INFO][6171] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:00.153027 containerd[1709]: 2026-04-17 23:48:00.149 [INFO][6163] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892" Apr 17 23:48:00.155581 containerd[1709]: time="2026-04-17T23:48:00.154807337Z" level=info msg="TearDown network for sandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\" successfully" Apr 17 23:48:03.581695 containerd[1709]: time="2026-04-17T23:48:03.581636481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:48:03.582321 containerd[1709]: time="2026-04-17T23:48:03.581746382Z" level=info msg="RemovePodSandbox \"b035605e7e01971b377d7379637b9872ca4a3f65130178aa7f08fbf4b457f892\" returns successfully" Apr 17 23:48:03.582321 containerd[1709]: time="2026-04-17T23:48:03.582301190Z" level=info msg="StopPodSandbox for \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\"" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.616 [WARNING][6192] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"d0a502cd-d376-426a-82a8-7e44f6c46407", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee", Pod:"calico-apiserver-5f7ccb55fc-wnsrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali684756efbe2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.616 [INFO][6192] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.616 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" iface="eth0" netns="" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.616 [INFO][6192] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.616 [INFO][6192] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.639 [INFO][6199] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.639 [INFO][6199] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.639 [INFO][6199] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.644 [WARNING][6199] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.644 [INFO][6199] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.646 [INFO][6199] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:03.648777 containerd[1709]: 2026-04-17 23:48:03.647 [INFO][6192] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.649422 containerd[1709]: time="2026-04-17T23:48:03.648818772Z" level=info msg="TearDown network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\" successfully" Apr 17 23:48:03.649422 containerd[1709]: time="2026-04-17T23:48:03.648848873Z" level=info msg="StopPodSandbox for \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\" returns successfully" Apr 17 23:48:03.649422 containerd[1709]: time="2026-04-17T23:48:03.649381980Z" level=info msg="RemovePodSandbox for \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\"" Apr 17 23:48:03.649422 containerd[1709]: time="2026-04-17T23:48:03.649416080Z" level=info msg="Forcibly stopping sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\"" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.682 [WARNING][6213] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"d0a502cd-d376-426a-82a8-7e44f6c46407", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"c4325e12f035c800503add42d9d766fb7c3a5d218c60be9ce91496e42d41c4ee", Pod:"calico-apiserver-5f7ccb55fc-wnsrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali684756efbe2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.683 [INFO][6213] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.683 [INFO][6213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" iface="eth0" netns="" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.683 [INFO][6213] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.683 [INFO][6213] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.705 [INFO][6221] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.705 [INFO][6221] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.705 [INFO][6221] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.711 [WARNING][6221] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.711 [INFO][6221] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" HandleID="k8s-pod-network.a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--wnsrc-eth0" Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.712 [INFO][6221] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:03.714832 containerd[1709]: 2026-04-17 23:48:03.713 [INFO][6213] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7" Apr 17 23:48:03.715525 containerd[1709]: time="2026-04-17T23:48:03.714849248Z" level=info msg="TearDown network for sandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\" successfully" Apr 17 23:48:03.726603 containerd[1709]: time="2026-04-17T23:48:03.726418602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:48:03.726603 containerd[1709]: time="2026-04-17T23:48:03.726501403Z" level=info msg="RemovePodSandbox \"a667a19196feb8b1ff85733fdd4f0da62ac390592b2a3ae982574d68e71503a7\" returns successfully" Apr 17 23:48:03.727179 containerd[1709]: time="2026-04-17T23:48:03.727131111Z" level=info msg="StopPodSandbox for \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\"" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.761 [WARNING][6235] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8a1318ab-ffd2-447a-b894-d520f5a1dd65", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef", Pod:"goldmane-cccfbd5cf-ql4z8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali7b4a9b097b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.762 [INFO][6235] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.763 [INFO][6235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" iface="eth0" netns="" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.763 [INFO][6235] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.763 [INFO][6235] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.786 [INFO][6242] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.787 [INFO][6242] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.787 [INFO][6242] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.793 [WARNING][6242] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.793 [INFO][6242] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.795 [INFO][6242] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:03.799237 containerd[1709]: 2026-04-17 23:48:03.797 [INFO][6235] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.799728 containerd[1709]: time="2026-04-17T23:48:03.799263768Z" level=info msg="TearDown network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\" successfully" Apr 17 23:48:03.799728 containerd[1709]: time="2026-04-17T23:48:03.799286368Z" level=info msg="StopPodSandbox for \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\" returns successfully" Apr 17 23:48:03.800224 containerd[1709]: time="2026-04-17T23:48:03.800190180Z" level=info msg="RemovePodSandbox for \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\"" Apr 17 23:48:03.800224 containerd[1709]: time="2026-04-17T23:48:03.800228381Z" level=info msg="Forcibly stopping sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\"" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.835 [WARNING][6257] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8a1318ab-ffd2-447a-b894-d520f5a1dd65", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"86f86b092192c99db0ff99a99a8c04e4d97353a905ca0cc143d7e4835c0a07ef", Pod:"goldmane-cccfbd5cf-ql4z8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b4a9b097b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.835 [INFO][6257] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.835 [INFO][6257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" iface="eth0" netns="" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.835 [INFO][6257] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.835 [INFO][6257] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.858 [INFO][6264] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.858 [INFO][6264] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.858 [INFO][6264] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.864 [WARNING][6264] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.864 [INFO][6264] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" HandleID="k8s-pod-network.c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-goldmane--cccfbd5cf--ql4z8-eth0" Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.865 [INFO][6264] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:03.868554 containerd[1709]: 2026-04-17 23:48:03.867 [INFO][6257] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b" Apr 17 23:48:03.868554 containerd[1709]: time="2026-04-17T23:48:03.868506087Z" level=info msg="TearDown network for sandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\" successfully" Apr 17 23:48:03.879058 containerd[1709]: time="2026-04-17T23:48:03.878994126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:48:03.879218 containerd[1709]: time="2026-04-17T23:48:03.879084127Z" level=info msg="RemovePodSandbox \"c1e60f4b84eedfd8c73b7ac2fe8d52368844aa7b7e4fa9976036c05ad92e877b\" returns successfully" Apr 17 23:48:03.879721 containerd[1709]: time="2026-04-17T23:48:03.879684035Z" level=info msg="StopPodSandbox for \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\"" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.915 [WARNING][6279] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a634795c-d355-406a-b830-50fc4384862e", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04", Pod:"coredns-66bc5c9577-8dtqd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali139a73999ba", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.915 [INFO][6279] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.915 [INFO][6279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" iface="eth0" netns="" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.915 [INFO][6279] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.915 [INFO][6279] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.935 [INFO][6286] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.935 [INFO][6286] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.935 [INFO][6286] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.942 [WARNING][6286] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.943 [INFO][6286] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.944 [INFO][6286] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:03.946902 containerd[1709]: 2026-04-17 23:48:03.945 [INFO][6279] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:03.947432 containerd[1709]: time="2026-04-17T23:48:03.946936127Z" level=info msg="TearDown network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\" successfully" Apr 17 23:48:03.947432 containerd[1709]: time="2026-04-17T23:48:03.946966628Z" level=info msg="StopPodSandbox for \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\" returns successfully" Apr 17 23:48:03.948113 containerd[1709]: time="2026-04-17T23:48:03.948060142Z" level=info msg="RemovePodSandbox for \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\"" Apr 17 23:48:03.948113 containerd[1709]: time="2026-04-17T23:48:03.948104943Z" level=info msg="Forcibly stopping sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\"" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:03.982 [WARNING][6300] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a634795c-d355-406a-b830-50fc4384862e", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"b4d6dd8f986429bce97a4b4a5edf1cba9985313fd15eebcc764f597822f81c04", Pod:"coredns-66bc5c9577-8dtqd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali139a73999ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:03.982 [INFO][6300] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:03.982 [INFO][6300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" iface="eth0" netns="" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:03.982 [INFO][6300] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:03.982 [INFO][6300] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:04.005 [INFO][6307] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:04.005 [INFO][6307] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:04.005 [INFO][6307] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:04.010 [WARNING][6307] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:04.010 [INFO][6307] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" HandleID="k8s-pod-network.445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-coredns--66bc5c9577--8dtqd-eth0" Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:04.012 [INFO][6307] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:04.014373 containerd[1709]: 2026-04-17 23:48:04.013 [INFO][6300] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb" Apr 17 23:48:04.015112 containerd[1709]: time="2026-04-17T23:48:04.014573925Z" level=info msg="TearDown network for sandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\" successfully" Apr 17 23:48:04.022421 containerd[1709]: time="2026-04-17T23:48:04.022378728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:48:04.022537 containerd[1709]: time="2026-04-17T23:48:04.022459229Z" level=info msg="RemovePodSandbox \"445bda02da49e631dc3ee70140d937af4da0a9d57072333b692d7d861b3072bb\" returns successfully" Apr 17 23:48:04.023065 containerd[1709]: time="2026-04-17T23:48:04.023033837Z" level=info msg="StopPodSandbox for \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\"" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.056 [WARNING][6321] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0", GenerateName:"calico-kube-controllers-9d6dc9bbd-", Namespace:"calico-system", SelfLink:"", UID:"5f78a19d-36c0-4d0e-b651-34be58a4bc17", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d6dc9bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e", Pod:"calico-kube-controllers-9d6dc9bbd-6sslf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0915fbd163", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.056 [INFO][6321] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.057 [INFO][6321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" iface="eth0" netns="" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.057 [INFO][6321] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.057 [INFO][6321] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.077 [INFO][6328] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.077 [INFO][6328] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.078 [INFO][6328] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.085 [WARNING][6328] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.085 [INFO][6328] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.086 [INFO][6328] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:04.089332 containerd[1709]: 2026-04-17 23:48:04.088 [INFO][6321] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.089993 containerd[1709]: time="2026-04-17T23:48:04.089404417Z" level=info msg="TearDown network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\" successfully" Apr 17 23:48:04.089993 containerd[1709]: time="2026-04-17T23:48:04.089436718Z" level=info msg="StopPodSandbox for \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\" returns successfully" Apr 17 23:48:04.090075 containerd[1709]: time="2026-04-17T23:48:04.090007625Z" level=info msg="RemovePodSandbox for \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\"" Apr 17 23:48:04.090075 containerd[1709]: time="2026-04-17T23:48:04.090041126Z" level=info msg="Forcibly stopping sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\"" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.125 [WARNING][6342] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0", GenerateName:"calico-kube-controllers-9d6dc9bbd-", Namespace:"calico-system", SelfLink:"", UID:"5f78a19d-36c0-4d0e-b651-34be58a4bc17", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d6dc9bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"465598e4b02ab4743bf9dbe5a9ce49c4615dfc8fa204d0d99bf2906fbd6ffd4e", Pod:"calico-kube-controllers-9d6dc9bbd-6sslf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0915fbd163", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.125 [INFO][6342] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.125 [INFO][6342] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" iface="eth0" netns="" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.125 [INFO][6342] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.125 [INFO][6342] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.146 [INFO][6349] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.147 [INFO][6349] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.147 [INFO][6349] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.153 [WARNING][6349] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.153 [INFO][6349] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" HandleID="k8s-pod-network.cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--kube--controllers--9d6dc9bbd--6sslf-eth0" Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.154 [INFO][6349] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:04.159314 containerd[1709]: 2026-04-17 23:48:04.155 [INFO][6342] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e" Apr 17 23:48:04.159314 containerd[1709]: time="2026-04-17T23:48:04.157598122Z" level=info msg="TearDown network for sandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\" successfully" Apr 17 23:48:04.164604 containerd[1709]: time="2026-04-17T23:48:04.164559015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:48:04.164812 containerd[1709]: time="2026-04-17T23:48:04.164643816Z" level=info msg="RemovePodSandbox \"cca7ff02c3c9465b52d54646ccfeb86daf2db0fe66c8c4a4f184ee4e796e632e\" returns successfully" Apr 17 23:48:04.165249 containerd[1709]: time="2026-04-17T23:48:04.165220523Z" level=info msg="StopPodSandbox for \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\"" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.197 [WARNING][6363] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"7034716d-ae1e-4a07-9d47-901268c9b69a", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45", Pod:"calico-apiserver-5f7ccb55fc-dvppd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5f84671acc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.197 [INFO][6363] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.197 [INFO][6363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" iface="eth0" netns="" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.197 [INFO][6363] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.198 [INFO][6363] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.219 [INFO][6370] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.219 [INFO][6370] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.219 [INFO][6370] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.228 [WARNING][6370] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.228 [INFO][6370] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.229 [INFO][6370] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:04.232343 containerd[1709]: 2026-04-17 23:48:04.231 [INFO][6363] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.233531 containerd[1709]: time="2026-04-17T23:48:04.232395314Z" level=info msg="TearDown network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\" successfully" Apr 17 23:48:04.233531 containerd[1709]: time="2026-04-17T23:48:04.232426015Z" level=info msg="StopPodSandbox for \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\" returns successfully" Apr 17 23:48:04.233531 containerd[1709]: time="2026-04-17T23:48:04.233181925Z" level=info msg="RemovePodSandbox for \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\"" Apr 17 23:48:04.233531 containerd[1709]: time="2026-04-17T23:48:04.233214425Z" level=info msg="Forcibly stopping sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\"" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.271 [WARNING][6384] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0", GenerateName:"calico-apiserver-5f7ccb55fc-", Namespace:"calico-system", SelfLink:"", UID:"7034716d-ae1e-4a07-9d47-901268c9b69a", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7ccb55fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-7251cc3c8a", ContainerID:"686264a36cfdc9a37c44efb3560fb627f692c961e28e49d676b8ff0a3450aa45", Pod:"calico-apiserver-5f7ccb55fc-dvppd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5f84671acc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.272 [INFO][6384] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.272 [INFO][6384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" iface="eth0" netns="" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.272 [INFO][6384] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.272 [INFO][6384] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.296 [INFO][6392] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.296 [INFO][6392] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.296 [INFO][6392] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.308 [WARNING][6392] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.308 [INFO][6392] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" HandleID="k8s-pod-network.51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-calico--apiserver--5f7ccb55fc--dvppd-eth0" Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.310 [INFO][6392] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:04.313243 containerd[1709]: 2026-04-17 23:48:04.311 [INFO][6384] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff" Apr 17 23:48:04.314081 containerd[1709]: time="2026-04-17T23:48:04.313271387Z" level=info msg="TearDown network for sandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\" successfully" Apr 17 23:48:04.335599 containerd[1709]: time="2026-04-17T23:48:04.335552983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:48:04.335753 containerd[1709]: time="2026-04-17T23:48:04.335634484Z" level=info msg="RemovePodSandbox \"51a8a4b3823b6ab6c929d157be6a4fa14c6968f00b451edb754ae7ed3199edff\" returns successfully" Apr 17 23:48:04.336284 containerd[1709]: time="2026-04-17T23:48:04.336232092Z" level=info msg="StopPodSandbox for \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\"" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.370 [WARNING][6406] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.370 [INFO][6406] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.370 [INFO][6406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" iface="eth0" netns="" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.370 [INFO][6406] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.370 [INFO][6406] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.392 [INFO][6414] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.392 [INFO][6414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.392 [INFO][6414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.399 [WARNING][6414] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.399 [INFO][6414] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.401 [INFO][6414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:04.404562 containerd[1709]: 2026-04-17 23:48:04.403 [INFO][6406] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.405204 containerd[1709]: time="2026-04-17T23:48:04.404621399Z" level=info msg="TearDown network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\" successfully" Apr 17 23:48:04.405204 containerd[1709]: time="2026-04-17T23:48:04.404653500Z" level=info msg="StopPodSandbox for \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\" returns successfully" Apr 17 23:48:04.405286 containerd[1709]: time="2026-04-17T23:48:04.405241908Z" level=info msg="RemovePodSandbox for \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\"" Apr 17 23:48:04.405286 containerd[1709]: time="2026-04-17T23:48:04.405275308Z" level=info msg="Forcibly stopping sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\"" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.438 [WARNING][6429] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" WorkloadEndpoint="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.438 [INFO][6429] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.438 [INFO][6429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" iface="eth0" netns="" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.438 [INFO][6429] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.438 [INFO][6429] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.460 [INFO][6436] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.460 [INFO][6436] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.460 [INFO][6436] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.467 [WARNING][6436] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.467 [INFO][6436] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" HandleID="k8s-pod-network.b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Workload="ci--4081.3.6--n--7251cc3c8a-k8s-whisker--f6b6dc666--6z9zk-eth0" Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.469 [INFO][6436] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:48:04.471734 containerd[1709]: 2026-04-17 23:48:04.470 [INFO][6429] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc" Apr 17 23:48:04.471734 containerd[1709]: time="2026-04-17T23:48:04.471681189Z" level=info msg="TearDown network for sandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\" successfully" Apr 17 23:48:04.479205 containerd[1709]: time="2026-04-17T23:48:04.479159088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:48:04.479366 containerd[1709]: time="2026-04-17T23:48:04.479251889Z" level=info msg="RemovePodSandbox \"b6fba560062d3e49b646a76189dfca2ca2e3f7db2b35c477731421bd5be491fc\" returns successfully" Apr 17 23:48:16.025845 systemd[1]: Started sshd@7-10.0.0.19:22-20.229.252.112:49238.service - OpenSSH per-connection server daemon (20.229.252.112:49238). 
Apr 17 23:48:16.168248 sshd[6508]: Accepted publickey for core from 20.229.252.112 port 49238 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:16.169846 sshd[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:16.173834 systemd-logind[1683]: New session 10 of user core. Apr 17 23:48:16.177889 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:48:16.348344 sshd[6508]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:16.351959 systemd[1]: sshd@7-10.0.0.19:22-20.229.252.112:49238.service: Deactivated successfully. Apr 17 23:48:16.354513 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:48:16.356155 systemd-logind[1683]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:48:16.357900 systemd-logind[1683]: Removed session 10. Apr 17 23:48:21.382132 systemd[1]: Started sshd@8-10.0.0.19:22-20.229.252.112:49246.service - OpenSSH per-connection server daemon (20.229.252.112:49246). Apr 17 23:48:21.499652 sshd[6554]: Accepted publickey for core from 20.229.252.112 port 49246 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:21.501534 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:21.505975 systemd-logind[1683]: New session 11 of user core. Apr 17 23:48:21.512866 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:48:21.667417 sshd[6554]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:21.671691 systemd-logind[1683]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:48:21.672289 systemd[1]: sshd@8-10.0.0.19:22-20.229.252.112:49246.service: Deactivated successfully. Apr 17 23:48:21.674571 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:48:21.675674 systemd-logind[1683]: Removed session 11. 
Apr 17 23:48:26.701109 systemd[1]: Started sshd@9-10.0.0.19:22-20.229.252.112:45456.service - OpenSSH per-connection server daemon (20.229.252.112:45456). Apr 17 23:48:26.814626 sshd[6587]: Accepted publickey for core from 20.229.252.112 port 45456 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:26.816213 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:26.820882 systemd-logind[1683]: New session 12 of user core. Apr 17 23:48:26.827214 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:48:26.984680 sshd[6587]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:26.987880 systemd[1]: sshd@9-10.0.0.19:22-20.229.252.112:45456.service: Deactivated successfully. Apr 17 23:48:26.990317 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:48:26.992018 systemd-logind[1683]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:48:26.993647 systemd-logind[1683]: Removed session 12. Apr 17 23:48:32.015179 systemd[1]: Started sshd@10-10.0.0.19:22-20.229.252.112:45464.service - OpenSSH per-connection server daemon (20.229.252.112:45464). Apr 17 23:48:32.134657 sshd[6615]: Accepted publickey for core from 20.229.252.112 port 45464 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:32.136339 sshd[6615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:32.141879 systemd-logind[1683]: New session 13 of user core. Apr 17 23:48:32.143910 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:48:32.299613 sshd[6615]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:32.302977 systemd[1]: sshd@10-10.0.0.19:22-20.229.252.112:45464.service: Deactivated successfully. Apr 17 23:48:32.305360 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:48:32.307228 systemd-logind[1683]: Session 13 logged out. Waiting for processes to exit. 
Apr 17 23:48:32.308474 systemd-logind[1683]: Removed session 13. Apr 17 23:48:37.335074 systemd[1]: Started sshd@11-10.0.0.19:22-20.229.252.112:47610.service - OpenSSH per-connection server daemon (20.229.252.112:47610). Apr 17 23:48:37.459404 sshd[6631]: Accepted publickey for core from 20.229.252.112 port 47610 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:37.461014 sshd[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:37.465783 systemd-logind[1683]: New session 14 of user core. Apr 17 23:48:37.473873 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:48:37.630396 sshd[6631]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:37.634904 systemd-logind[1683]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:48:37.635547 systemd[1]: sshd@11-10.0.0.19:22-20.229.252.112:47610.service: Deactivated successfully. Apr 17 23:48:37.638290 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:48:37.639382 systemd-logind[1683]: Removed session 14. Apr 17 23:48:42.665033 systemd[1]: Started sshd@12-10.0.0.19:22-20.229.252.112:47620.service - OpenSSH per-connection server daemon (20.229.252.112:47620). Apr 17 23:48:42.781020 sshd[6672]: Accepted publickey for core from 20.229.252.112 port 47620 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:42.783263 sshd[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:42.790454 systemd-logind[1683]: New session 15 of user core. Apr 17 23:48:42.793934 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:48:42.952672 sshd[6672]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:42.956883 systemd-logind[1683]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:48:42.957398 systemd[1]: sshd@12-10.0.0.19:22-20.229.252.112:47620.service: Deactivated successfully. 
Apr 17 23:48:42.960325 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:48:42.961396 systemd-logind[1683]: Removed session 15. Apr 17 23:48:47.984010 systemd[1]: Started sshd@13-10.0.0.19:22-20.229.252.112:57276.service - OpenSSH per-connection server daemon (20.229.252.112:57276). Apr 17 23:48:48.098733 sshd[6720]: Accepted publickey for core from 20.229.252.112 port 57276 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:48.100278 sshd[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:48.104798 systemd-logind[1683]: New session 16 of user core. Apr 17 23:48:48.110861 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:48:48.263446 sshd[6720]: pam_unix(sshd:session): session closed for user core Apr 17 23:48:48.267263 systemd[1]: sshd@13-10.0.0.19:22-20.229.252.112:57276.service: Deactivated successfully. Apr 17 23:48:48.269619 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:48:48.271167 systemd-logind[1683]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:48:48.272465 systemd-logind[1683]: Removed session 16. Apr 17 23:48:50.418614 systemd[1]: run-containerd-runc-k8s.io-22d25d5ce8d4dcecfc0b0fffe99df5492443d45c43d1396f5b9f8c6ccc8861d1-runc.eGX6rm.mount: Deactivated successfully. Apr 17 23:48:53.298006 systemd[1]: Started sshd@14-10.0.0.19:22-20.229.252.112:57282.service - OpenSSH per-connection server daemon (20.229.252.112:57282). Apr 17 23:48:53.414765 sshd[6788]: Accepted publickey for core from 20.229.252.112 port 57282 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48 Apr 17 23:48:53.415374 sshd[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:48:53.419523 systemd-logind[1683]: New session 17 of user core. Apr 17 23:48:53.422901 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 17 23:48:53.580221 sshd[6788]: pam_unix(sshd:session): session closed for user core
Apr 17 23:48:53.584355 systemd[1]: sshd@14-10.0.0.19:22-20.229.252.112:57282.service: Deactivated successfully.
Apr 17 23:48:53.587202 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:48:53.588409 systemd-logind[1683]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:48:53.589431 systemd-logind[1683]: Removed session 17.
Apr 17 23:48:53.609999 systemd[1]: Started sshd@15-10.0.0.19:22-20.229.252.112:57292.service - OpenSSH per-connection server daemon (20.229.252.112:57292).
Apr 17 23:48:53.725104 sshd[6802]: Accepted publickey for core from 20.229.252.112 port 57292 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:48:53.726742 sshd[6802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:48:53.731922 systemd-logind[1683]: New session 18 of user core.
Apr 17 23:48:53.734896 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:48:53.927133 sshd[6802]: pam_unix(sshd:session): session closed for user core
Apr 17 23:48:53.933097 systemd[1]: sshd@15-10.0.0.19:22-20.229.252.112:57292.service: Deactivated successfully.
Apr 17 23:48:53.939232 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:48:53.945908 systemd-logind[1683]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:48:53.962192 systemd-logind[1683]: Removed session 18.
Apr 17 23:48:53.967190 systemd[1]: Started sshd@16-10.0.0.19:22-20.229.252.112:57300.service - OpenSSH per-connection server daemon (20.229.252.112:57300).
Apr 17 23:48:54.085370 sshd[6813]: Accepted publickey for core from 20.229.252.112 port 57300 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:48:54.086550 sshd[6813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:48:54.090766 systemd-logind[1683]: New session 19 of user core.
Apr 17 23:48:54.094897 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:48:54.249650 sshd[6813]: pam_unix(sshd:session): session closed for user core
Apr 17 23:48:54.253304 systemd[1]: sshd@16-10.0.0.19:22-20.229.252.112:57300.service: Deactivated successfully.
Apr 17 23:48:54.256132 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:48:54.257106 systemd-logind[1683]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:48:54.258159 systemd-logind[1683]: Removed session 19.
Apr 17 23:48:59.279008 systemd[1]: Started sshd@17-10.0.0.19:22-20.229.252.112:51738.service - OpenSSH per-connection server daemon (20.229.252.112:51738).
Apr 17 23:48:59.395588 sshd[6827]: Accepted publickey for core from 20.229.252.112 port 51738 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:48:59.397141 sshd[6827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:48:59.401804 systemd-logind[1683]: New session 20 of user core.
Apr 17 23:48:59.408023 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:48:59.565214 sshd[6827]: pam_unix(sshd:session): session closed for user core
Apr 17 23:48:59.569531 systemd-logind[1683]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:48:59.570191 systemd[1]: sshd@17-10.0.0.19:22-20.229.252.112:51738.service: Deactivated successfully.
Apr 17 23:48:59.572265 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:48:59.573577 systemd-logind[1683]: Removed session 20.
Apr 17 23:49:04.600025 systemd[1]: Started sshd@18-10.0.0.19:22-20.229.252.112:51748.service - OpenSSH per-connection server daemon (20.229.252.112:51748).
Apr 17 23:49:04.717276 sshd[6840]: Accepted publickey for core from 20.229.252.112 port 51748 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:04.717899 sshd[6840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:04.722996 systemd-logind[1683]: New session 21 of user core.
Apr 17 23:49:04.727873 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:49:04.884241 sshd[6840]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:04.888410 systemd[1]: sshd@18-10.0.0.19:22-20.229.252.112:51748.service: Deactivated successfully.
Apr 17 23:49:04.890659 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:49:04.891807 systemd-logind[1683]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:49:04.892833 systemd-logind[1683]: Removed session 21.
Apr 17 23:49:04.912041 systemd[1]: Started sshd@19-10.0.0.19:22-20.229.252.112:52972.service - OpenSSH per-connection server daemon (20.229.252.112:52972).
Apr 17 23:49:05.034121 sshd[6855]: Accepted publickey for core from 20.229.252.112 port 52972 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:05.035635 sshd[6855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:05.040544 systemd-logind[1683]: New session 22 of user core.
Apr 17 23:49:05.045892 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:49:05.270997 sshd[6855]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:05.275204 systemd-logind[1683]: Session 22 logged out. Waiting for processes to exit.
Apr 17 23:49:05.276152 systemd[1]: sshd@19-10.0.0.19:22-20.229.252.112:52972.service: Deactivated successfully.
Apr 17 23:49:05.278563 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 23:49:05.279685 systemd-logind[1683]: Removed session 22.
Apr 17 23:49:05.299035 systemd[1]: Started sshd@20-10.0.0.19:22-20.229.252.112:52976.service - OpenSSH per-connection server daemon (20.229.252.112:52976).
Apr 17 23:49:05.431163 sshd[6866]: Accepted publickey for core from 20.229.252.112 port 52976 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:05.432862 sshd[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:05.437607 systemd-logind[1683]: New session 23 of user core.
Apr 17 23:49:05.441894 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 17 23:49:06.176130 sshd[6866]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:06.182149 systemd-logind[1683]: Session 23 logged out. Waiting for processes to exit.
Apr 17 23:49:06.183142 systemd[1]: sshd@20-10.0.0.19:22-20.229.252.112:52976.service: Deactivated successfully.
Apr 17 23:49:06.185684 systemd[1]: session-23.scope: Deactivated successfully.
Apr 17 23:49:06.188139 systemd-logind[1683]: Removed session 23.
Apr 17 23:49:06.207013 systemd[1]: Started sshd@21-10.0.0.19:22-20.229.252.112:52984.service - OpenSSH per-connection server daemon (20.229.252.112:52984).
Apr 17 23:49:06.328765 sshd[6890]: Accepted publickey for core from 20.229.252.112 port 52984 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:06.328607 sshd[6890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:06.341921 systemd-logind[1683]: New session 24 of user core.
Apr 17 23:49:06.348887 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 17 23:49:06.637404 sshd[6890]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:06.643791 systemd-logind[1683]: Session 24 logged out. Waiting for processes to exit.
Apr 17 23:49:06.644278 systemd[1]: sshd@21-10.0.0.19:22-20.229.252.112:52984.service: Deactivated successfully.
Apr 17 23:49:06.647368 systemd[1]: session-24.scope: Deactivated successfully.
Apr 17 23:49:06.648349 systemd-logind[1683]: Removed session 24.
Apr 17 23:49:06.668073 systemd[1]: Started sshd@22-10.0.0.19:22-20.229.252.112:52986.service - OpenSSH per-connection server daemon (20.229.252.112:52986).
Apr 17 23:49:06.785345 sshd[6941]: Accepted publickey for core from 20.229.252.112 port 52986 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:06.787011 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:06.792462 systemd-logind[1683]: New session 25 of user core.
Apr 17 23:49:06.795866 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 17 23:49:06.950344 sshd[6941]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:06.954805 systemd[1]: sshd@22-10.0.0.19:22-20.229.252.112:52986.service: Deactivated successfully.
Apr 17 23:49:06.957386 systemd[1]: session-25.scope: Deactivated successfully.
Apr 17 23:49:06.958435 systemd-logind[1683]: Session 25 logged out. Waiting for processes to exit.
Apr 17 23:49:06.959908 systemd-logind[1683]: Removed session 25.
Apr 17 23:49:11.985749 systemd[1]: Started sshd@23-10.0.0.19:22-20.229.252.112:53000.service - OpenSSH per-connection server daemon (20.229.252.112:53000).
Apr 17 23:49:12.101213 sshd[6976]: Accepted publickey for core from 20.229.252.112 port 53000 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:12.103054 sshd[6976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:12.108558 systemd-logind[1683]: New session 26 of user core.
Apr 17 23:49:12.115887 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 17 23:49:12.269893 sshd[6976]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:12.273275 systemd-logind[1683]: Session 26 logged out. Waiting for processes to exit.
Apr 17 23:49:12.273959 systemd[1]: sshd@23-10.0.0.19:22-20.229.252.112:53000.service: Deactivated successfully.
Apr 17 23:49:12.276647 systemd[1]: session-26.scope: Deactivated successfully.
Apr 17 23:49:12.278937 systemd-logind[1683]: Removed session 26.
Apr 17 23:49:17.303038 systemd[1]: Started sshd@24-10.0.0.19:22-20.229.252.112:48296.service - OpenSSH per-connection server daemon (20.229.252.112:48296).
Apr 17 23:49:17.418360 sshd[6992]: Accepted publickey for core from 20.229.252.112 port 48296 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:17.419924 sshd[6992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:17.426167 systemd-logind[1683]: New session 27 of user core.
Apr 17 23:49:17.432884 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 17 23:49:17.590276 sshd[6992]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:17.594540 systemd-logind[1683]: Session 27 logged out. Waiting for processes to exit.
Apr 17 23:49:17.595352 systemd[1]: sshd@24-10.0.0.19:22-20.229.252.112:48296.service: Deactivated successfully.
Apr 17 23:49:17.598398 systemd[1]: session-27.scope: Deactivated successfully.
Apr 17 23:49:17.599837 systemd-logind[1683]: Removed session 27.
Apr 17 23:49:20.417286 systemd[1]: run-containerd-runc-k8s.io-22d25d5ce8d4dcecfc0b0fffe99df5492443d45c43d1396f5b9f8c6ccc8861d1-runc.qOi0d4.mount: Deactivated successfully.
Apr 17 23:49:22.620974 systemd[1]: Started sshd@25-10.0.0.19:22-20.229.252.112:48306.service - OpenSSH per-connection server daemon (20.229.252.112:48306).
Apr 17 23:49:22.742741 sshd[7026]: Accepted publickey for core from 20.229.252.112 port 48306 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:22.743633 sshd[7026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:22.748683 systemd-logind[1683]: New session 28 of user core.
Apr 17 23:49:22.755953 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 17 23:49:22.914083 sshd[7026]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:22.917898 systemd[1]: sshd@25-10.0.0.19:22-20.229.252.112:48306.service: Deactivated successfully.
Apr 17 23:49:22.920461 systemd[1]: session-28.scope: Deactivated successfully.
Apr 17 23:49:22.922200 systemd-logind[1683]: Session 28 logged out. Waiting for processes to exit.
Apr 17 23:49:22.923634 systemd-logind[1683]: Removed session 28.
Apr 17 23:49:27.948281 systemd[1]: Started sshd@26-10.0.0.19:22-20.229.252.112:38458.service - OpenSSH per-connection server daemon (20.229.252.112:38458).
Apr 17 23:49:28.064563 sshd[7060]: Accepted publickey for core from 20.229.252.112 port 38458 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:28.065233 sshd[7060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:28.072540 systemd-logind[1683]: New session 29 of user core.
Apr 17 23:49:28.079245 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 17 23:49:28.237594 sshd[7060]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:28.240424 systemd[1]: sshd@26-10.0.0.19:22-20.229.252.112:38458.service: Deactivated successfully.
Apr 17 23:49:28.243128 systemd[1]: session-29.scope: Deactivated successfully.
Apr 17 23:49:28.244700 systemd-logind[1683]: Session 29 logged out. Waiting for processes to exit.
Apr 17 23:49:28.246439 systemd-logind[1683]: Removed session 29.
Apr 17 23:49:33.273011 systemd[1]: Started sshd@27-10.0.0.19:22-20.229.252.112:38474.service - OpenSSH per-connection server daemon (20.229.252.112:38474).
Apr 17 23:49:33.389316 sshd[7072]: Accepted publickey for core from 20.229.252.112 port 38474 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:33.390929 sshd[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:33.395723 systemd-logind[1683]: New session 30 of user core.
Apr 17 23:49:33.400882 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 17 23:49:33.553101 sshd[7072]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:33.556776 systemd[1]: sshd@27-10.0.0.19:22-20.229.252.112:38474.service: Deactivated successfully.
Apr 17 23:49:33.559210 systemd[1]: session-30.scope: Deactivated successfully.
Apr 17 23:49:33.560119 systemd-logind[1683]: Session 30 logged out. Waiting for processes to exit.
Apr 17 23:49:33.561494 systemd-logind[1683]: Removed session 30.
Apr 17 23:49:38.585043 systemd[1]: Started sshd@28-10.0.0.19:22-20.229.252.112:36628.service - OpenSSH per-connection server daemon (20.229.252.112:36628).
Apr 17 23:49:38.709582 sshd[7087]: Accepted publickey for core from 20.229.252.112 port 36628 ssh2: RSA SHA256:sMtqA11TjzQIJRJw4PihEx7btYqNmsZRNCIArhmId48
Apr 17 23:49:38.711123 sshd[7087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:49:38.715836 systemd-logind[1683]: New session 31 of user core.
Apr 17 23:49:38.719876 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 17 23:49:38.878390 sshd[7087]: pam_unix(sshd:session): session closed for user core
Apr 17 23:49:38.882789 systemd[1]: sshd@28-10.0.0.19:22-20.229.252.112:36628.service: Deactivated successfully.
Apr 17 23:49:38.885019 systemd[1]: session-31.scope: Deactivated successfully.
Apr 17 23:49:38.885934 systemd-logind[1683]: Session 31 logged out. Waiting for processes to exit.
Apr 17 23:49:38.887026 systemd-logind[1683]: Removed session 31.