Mar 7 01:18:37.130798 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:18:37.130834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:18:37.130851 kernel: BIOS-provided physical RAM map:
Mar 7 01:18:37.130862 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:18:37.130872 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 7 01:18:37.130882 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Mar 7 01:18:37.130895 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Mar 7 01:18:37.130906 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Mar 7 01:18:37.130919 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Mar 7 01:18:37.130930 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Mar 7 01:18:37.130962 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 7 01:18:37.130973 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 7 01:18:37.130984 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 7 01:18:37.130996 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 7 01:18:37.131013 kernel: printk: bootconsole [earlyser0] enabled
Mar 7 01:18:37.131025 kernel: NX (Execute Disable) protection: active
Mar 7 01:18:37.131037 kernel: APIC: Static calls initialized
Mar 7 01:18:37.131049 kernel: efi: EFI v2.7 by Microsoft
Mar 7 01:18:37.131062 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f421418
Mar 7 01:18:37.131074 kernel: SMBIOS 3.1.0 present.
Mar 7 01:18:37.131087 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 7 01:18:37.131099 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 7 01:18:37.131111 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Mar 7 01:18:37.131123 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 7 01:18:37.131135 kernel: Hyper-V: Nested features: 0x1e0101
Mar 7 01:18:37.131150 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 7 01:18:37.131162 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 7 01:18:37.131175 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:18:37.131187 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:18:37.131200 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Mar 7 01:18:37.131213 kernel: tsc: Detected 2593.907 MHz processor
Mar 7 01:18:37.131226 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:18:37.131239 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:18:37.131252 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Mar 7 01:18:37.131267 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:18:37.131281 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:18:37.131292 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Mar 7 01:18:37.131303 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Mar 7 01:18:37.131316 kernel: Using GB pages for direct mapping
Mar 7 01:18:37.131331 kernel: Secure boot disabled
Mar 7 01:18:37.131351 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:18:37.131370 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 7 01:18:37.131385 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131400 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131415 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 01:18:37.131431 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 7 01:18:37.131446 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131461 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131480 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131495 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131510 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131525 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:18:37.131540 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 7 01:18:37.131553 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Mar 7 01:18:37.131566 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 7 01:18:37.131580 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 7 01:18:37.131593 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 7 01:18:37.131609 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 7 01:18:37.131623 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 7 01:18:37.131636 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Mar 7 01:18:37.131649 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 7 01:18:37.131661 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:18:37.131674 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:18:37.131687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 7 01:18:37.131700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Mar 7 01:18:37.131713 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Mar 7 01:18:37.131729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 7 01:18:37.131742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 7 01:18:37.131756 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 7 01:18:37.131769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 7 01:18:37.131782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 7 01:18:37.131796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 7 01:18:37.131810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 7 01:18:37.131823 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Mar 7 01:18:37.131839 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Mar 7 01:18:37.131853 kernel: Zone ranges:
Mar 7 01:18:37.131867 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:18:37.131880 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:18:37.131894 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:18:37.131907 kernel: Movable zone start for each node
Mar 7 01:18:37.131921 kernel: Early memory node ranges
Mar 7 01:18:37.131934 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:18:37.131959 kernel:   node   0: [mem 0x0000000000100000-0x000000000437dfff]
Mar 7 01:18:37.131973 kernel:   node   0: [mem 0x000000000477e000-0x000000003ff1efff]
Mar 7 01:18:37.131985 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 7 01:18:37.131996 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:18:37.132008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 7 01:18:37.132020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:18:37.132031 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:18:37.132043 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Mar 7 01:18:37.132056 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Mar 7 01:18:37.132069 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 7 01:18:37.132085 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 7 01:18:37.132099 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:18:37.132113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:18:37.132127 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:18:37.132140 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 7 01:18:37.132154 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:18:37.132168 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 7 01:18:37.132182 kernel: Booting paravirtualized kernel on Hyper-V
Mar 7 01:18:37.132196 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:18:37.132212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:18:37.132226 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:18:37.132240 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:18:37.132253 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:18:37.132266 kernel: Hyper-V: PV spinlocks enabled
Mar 7 01:18:37.132279 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:18:37.132295 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:18:37.132309 kernel: random: crng init done
Mar 7 01:18:37.132326 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 7 01:18:37.132340 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:18:37.132353 kernel: Fallback order for Node 0: 0
Mar 7 01:18:37.132366 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Mar 7 01:18:37.132378 kernel: Policy zone: Normal
Mar 7 01:18:37.132392 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:18:37.132406 kernel: software IO TLB: area num 2.
Mar 7 01:18:37.132419 kernel: Memory: 8066052K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316916K reserved, 0K cma-reserved)
Mar 7 01:18:37.132434 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:18:37.132461 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:18:37.132475 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:18:37.132489 kernel: Dynamic Preempt: voluntary
Mar 7 01:18:37.132506 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:18:37.132520 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:18:37.132535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:18:37.132550 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:18:37.132564 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:18:37.132579 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:18:37.132597 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:18:37.132611 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:18:37.132625 kernel: Using NULL legacy PIC
Mar 7 01:18:37.132639 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 7 01:18:37.132652 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:18:37.132665 kernel: Console: colour dummy device 80x25
Mar 7 01:18:37.132679 kernel: printk: console [tty1] enabled
Mar 7 01:18:37.132693 kernel: printk: console [ttyS0] enabled
Mar 7 01:18:37.132712 kernel: printk: bootconsole [earlyser0] disabled
Mar 7 01:18:37.132726 kernel: ACPI: Core revision 20230628
Mar 7 01:18:37.132741 kernel: Failed to register legacy timer interrupt
Mar 7 01:18:37.132754 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:18:37.132765 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 01:18:37.132777 kernel: Hyper-V: Using IPI hypercalls
Mar 7 01:18:37.132790 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Mar 7 01:18:37.132804 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Mar 7 01:18:37.132817 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Mar 7 01:18:37.132830 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Mar 7 01:18:37.132839 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Mar 7 01:18:37.132854 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 7 01:18:37.132867 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Mar 7 01:18:37.132882 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:18:37.132896 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:18:37.132910 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:18:37.132925 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:18:37.132956 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:18:37.132971 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:18:37.132992 kernel: RETBleed: Vulnerable
Mar 7 01:18:37.133007 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:18:37.133023 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:18:37.133038 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:18:37.133054 kernel: active return thunk: its_return_thunk
Mar 7 01:18:37.133069 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:18:37.133084 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:18:37.133099 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:18:37.133114 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:18:37.133130 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:18:37.133150 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:18:37.133166 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:18:37.133181 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:18:37.133197 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 7 01:18:37.133212 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 7 01:18:37.133228 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 7 01:18:37.133244 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Mar 7 01:18:37.133258 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:18:37.133271 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:18:37.133286 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:18:37.133299 kernel: landlock: Up and running.
Mar 7 01:18:37.133312 kernel: SELinux: Initializing.
Mar 7 01:18:37.133329 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:18:37.133343 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:18:37.133353 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 7 01:18:37.133361 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:18:37.133369 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:18:37.133378 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:18:37.133386 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:18:37.133394 kernel: signal: max sigframe size: 3632
Mar 7 01:18:37.133402 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:18:37.133414 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:18:37.133422 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:18:37.133430 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:18:37.133438 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:18:37.133446 kernel: .... node #0, CPUs: #1
Mar 7 01:18:37.133454 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Mar 7 01:18:37.133463 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:18:37.133471 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:18:37.133479 kernel: smpboot: Max logical packages: 1
Mar 7 01:18:37.133490 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Mar 7 01:18:37.133498 kernel: devtmpfs: initialized
Mar 7 01:18:37.133507 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:18:37.133515 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 7 01:18:37.133523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:18:37.133531 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:18:37.133539 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:18:37.133547 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:18:37.133555 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:18:37.133565 kernel: audit: type=2000 audit(1772846316.031:1): state=initialized audit_enabled=0 res=1
Mar 7 01:18:37.133574 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:18:37.133581 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:18:37.133590 kernel: cpuidle: using governor menu
Mar 7 01:18:37.133598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:18:37.133606 kernel: dca service started, version 1.12.1
Mar 7 01:18:37.133614 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Mar 7 01:18:37.133622 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Mar 7 01:18:37.133629 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:18:37.133640 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:18:37.133648 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:18:37.133656 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:18:37.133664 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:18:37.133672 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:18:37.133680 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:18:37.133688 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:18:37.133696 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:18:37.133706 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:18:37.133715 kernel: ACPI: Interpreter enabled
Mar 7 01:18:37.133722 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:18:37.133731 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:18:37.133739 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:18:37.133747 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:18:37.133755 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 7 01:18:37.133763 kernel: iommu: Default domain type: Translated
Mar 7 01:18:37.133771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:18:37.133779 kernel: efivars: Registered efivars operations
Mar 7 01:18:37.133789 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:18:37.133797 kernel: PCI: System does not support PCI
Mar 7 01:18:37.133805 kernel: vgaarb: loaded
Mar 7 01:18:37.133813 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Mar 7 01:18:37.133821 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:18:37.133835 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:18:37.133845 kernel: pnp: PnP ACPI init
Mar 7 01:18:37.133853 kernel: pnp: PnP ACPI: found 3 devices
Mar 7 01:18:37.133861 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:18:37.133872 kernel: NET: Registered PF_INET protocol family
Mar 7 01:18:37.133880 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:18:37.133889 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:18:37.133897 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:18:37.133906 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:18:37.133914 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:18:37.133925 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:18:37.133934 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:18:37.133961 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:18:37.133975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:18:37.133983 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:18:37.133992 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:18:37.134000 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:18:37.134008 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Mar 7 01:18:37.134016 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:18:37.134024 kernel: Initialise system trusted keyrings
Mar 7 01:18:37.134032 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:18:37.134043 kernel: Key type asymmetric registered
Mar 7 01:18:37.134051 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:18:37.134058 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:18:37.134067 kernel: io scheduler mq-deadline registered
Mar 7 01:18:37.134075 kernel: io scheduler kyber registered
Mar 7 01:18:37.134083 kernel: io scheduler bfq registered
Mar 7 01:18:37.134091 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:18:37.134099 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:18:37.134107 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:18:37.134115 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:18:37.134125 kernel: i8042: PNP: No PS/2 controller found.
Mar 7 01:18:37.134269 kernel: rtc_cmos 00:02: registered as rtc0
Mar 7 01:18:37.134353 kernel: rtc_cmos 00:02: setting system clock to 2026-03-07T01:18:36 UTC (1772846316)
Mar 7 01:18:37.134429 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 7 01:18:37.134439 kernel: intel_pstate: CPU model not supported
Mar 7 01:18:37.134448 kernel: efifb: probing for efifb
Mar 7 01:18:37.134456 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 01:18:37.134467 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 01:18:37.134475 kernel: efifb: scrolling: redraw
Mar 7 01:18:37.134483 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:18:37.134491 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:18:37.134499 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:18:37.134507 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:18:37.134515 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:18:37.134523 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:18:37.134532 kernel: Segment Routing with IPv6
Mar 7 01:18:37.134542 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:18:37.134550 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:18:37.134558 kernel: Key type dns_resolver registered
Mar 7 01:18:37.134566 kernel: IPI shorthand broadcast: enabled
Mar 7 01:18:37.134575 kernel: sched_clock: Marking stable (932003300, 55349000)->(1228093300, -240741000)
Mar 7 01:18:37.134583 kernel: registered taskstats version 1
Mar 7 01:18:37.134591 kernel: Loading compiled-in X.509 certificates
Mar 7 01:18:37.134599 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:18:37.134607 kernel: Key type .fscrypt registered
Mar 7 01:18:37.134617 kernel: Key type fscrypt-provisioning registered
Mar 7 01:18:37.134625 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:18:37.134633 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:18:37.134641 kernel: ima: No architecture policies found
Mar 7 01:18:37.134649 kernel: clk: Disabling unused clocks
Mar 7 01:18:37.134657 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:18:37.134665 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:18:37.134673 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:18:37.134681 kernel: Run /init as init process
Mar 7 01:18:37.134692 kernel:   with arguments:
Mar 7 01:18:37.134699 kernel:     /init
Mar 7 01:18:37.134708 kernel:   with environment:
Mar 7 01:18:37.134715 kernel:     HOME=/
Mar 7 01:18:37.134723 kernel:     TERM=linux
Mar 7 01:18:37.134734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:18:37.134744 systemd[1]: Detected virtualization microsoft.
Mar 7 01:18:37.134753 systemd[1]: Detected architecture x86-64.
Mar 7 01:18:37.134763 systemd[1]: Running in initrd.
Mar 7 01:18:37.134772 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:18:37.134780 systemd[1]: Hostname set to .
Mar 7 01:18:37.134789 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:18:37.134797 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:18:37.134806 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:18:37.134814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:18:37.134823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:18:37.134834 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:18:37.134843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:18:37.134851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:18:37.134861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:18:37.134870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:18:37.134879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:18:37.134887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:18:37.134898 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:18:37.134906 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:18:37.134915 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:18:37.134923 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:18:37.134932 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:18:37.140095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:18:37.140122 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:18:37.140137 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:18:37.140152 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:18:37.140173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:18:37.140188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:18:37.140205 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:18:37.140220 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:18:37.140237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:18:37.140251 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:18:37.140264 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:18:37.140279 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:18:37.140298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:18:37.140346 systemd-journald[177]: Collecting audit messages is disabled.
Mar 7 01:18:37.140379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:18:37.140394 systemd-journald[177]: Journal started
Mar 7 01:18:37.140430 systemd-journald[177]: Runtime Journal (/run/log/journal/962d85a35e804333be87ab8b195f8e34) is 8.0M, max 158.7M, 150.7M free.
Mar 7 01:18:37.152129 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:18:37.152841 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:18:37.160614 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:18:37.166082 systemd-modules-load[178]: Inserted module 'overlay'
Mar 7 01:18:37.166294 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:18:37.173029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:18:37.194312 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:18:37.204148 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:18:37.223337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:18:37.228642 systemd-modules-load[178]: Inserted module 'br_netfilter'
Mar 7 01:18:37.233131 kernel: Bridge firewalling registered
Mar 7 01:18:37.232119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:18:37.236088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:18:37.244792 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:18:37.250276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:18:37.263857 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:18:37.275225 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:18:37.279095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:18:37.280463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:18:37.306731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:18:37.309097 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:18:37.318108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:18:37.331708 dracut-cmdline[205]: dracut-dracut-053
Mar 7 01:18:37.337188 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:18:37.381218 systemd-resolved[219]: Positive Trust Anchors:
Mar 7 01:18:37.381236 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:18:37.381296 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:18:37.412589 systemd-resolved[219]: Defaulting to hostname 'linux'.
Mar 7 01:18:37.413869 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:18:37.417905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:18:37.436961 kernel: SCSI subsystem initialized
Mar 7 01:18:37.447963 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:18:37.459969 kernel: iscsi: registered transport (tcp)
Mar 7 01:18:37.481032 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:18:37.481144 kernel: QLogic iSCSI HBA Driver
Mar 7 01:18:37.518599 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:18:37.527256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:18:37.556143 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:18:37.556318 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:18:37.560961 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:18:37.600974 kernel: raid6: avx512x4 gen() 18008 MB/s
Mar 7 01:18:37.620959 kernel: raid6: avx512x2 gen() 18080 MB/s
Mar 7 01:18:37.639951 kernel: raid6: avx512x1 gen() 18165 MB/s
Mar 7 01:18:37.658956 kernel: raid6: avx2x4 gen() 18134 MB/s
Mar 7 01:18:37.678959 kernel: raid6: avx2x2 gen() 18109 MB/s
Mar 7 01:18:37.699431 kernel: raid6: avx2x1 gen() 13756 MB/s
Mar 7 01:18:37.699464 kernel: raid6: using algorithm avx512x1 gen() 18165 MB/s
Mar 7 01:18:37.721429 kernel: raid6: .... xor() 25744 MB/s, rmw enabled
Mar 7 01:18:37.721463 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:18:37.744976 kernel: xor: automatically using best checksumming function avx
Mar 7 01:18:37.892977 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:18:37.903425 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:18:37.915138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:18:37.929849 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Mar 7 01:18:37.934523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:18:37.948128 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:18:37.961756 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Mar 7 01:18:37.992290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:18:38.003235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:18:38.046257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:18:38.057166 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:18:38.079539 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:18:38.090342 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:18:38.098656 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:18:38.106911 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:18:38.120330 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:18:38.148536 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:18:38.156987 kernel: hv_vmbus: Vmbus version:5.2 Mar 7 01:18:38.157559 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:18:38.174966 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 7 01:18:38.194418 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 7 01:18:38.194493 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 7 01:18:38.194513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 7 01:18:38.201961 kernel: hv_vmbus: registering driver hv_storvsc Mar 7 01:18:38.207907 kernel: scsi host1: storvsc_host_t Mar 7 01:18:38.207998 kernel: scsi host0: storvsc_host_t Mar 7 01:18:38.212641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 7 01:18:38.215645 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 7 01:18:38.215570 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:18:38.224849 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 7 01:18:38.226986 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:18:38.230699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:18:38.235001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:18:38.236581 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:18:38.258183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:18:38.268905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:18:38.278192 kernel: PTP clock support registered Mar 7 01:18:38.274600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:18:38.287156 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:18:38.287226 kernel: AES CTR mode by8 optimization enabled Mar 7 01:18:38.289250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:18:38.305960 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:18:38.319964 kernel: hv_vmbus: registering driver hid_hyperv Mar 7 01:18:38.320019 kernel: hv_vmbus: registering driver hv_netvsc Mar 7 01:18:38.324195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:18:38.343034 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 7 01:18:38.343061 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 7 01:18:38.343238 kernel: hv_utils: Registering HyperV Utility Driver Mar 7 01:18:38.343251 kernel: hv_vmbus: registering driver hv_utils Mar 7 01:18:38.348154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:18:38.367831 kernel: hv_utils: Heartbeat IC version 3.0 Mar 7 01:18:38.367888 kernel: hv_utils: Shutdown IC version 3.2 Mar 7 01:18:38.370958 kernel: hv_utils: TimeSync IC version 4.0 Mar 7 01:18:38.873778 systemd-resolved[219]: Clock change detected. Flushing caches. Mar 7 01:18:38.885013 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 7 01:18:38.885317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:18:38.893711 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 7 01:18:38.897391 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 01:18:38.927116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:18:38.933448 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 7 01:18:38.933814 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 7 01:18:38.936110 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:18:38.940335 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 7 01:18:38.940641 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 7 01:18:38.954083 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:38.954167 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:18:38.970121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#301 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:18:39.031340 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: VF slot 1 added Mar 7 01:18:39.038117 kernel: hv_vmbus: registering driver hv_pci Mar 7 01:18:39.190830 kernel: hv_pci 375e7131-619a-4e43-bab3-e106caeea546: PCI VMBus probing: Using version 0x10004 Mar 7 01:18:39.198466 kernel: hv_pci 375e7131-619a-4e43-bab3-e106caeea546: PCI host bridge to bus 619a:00 Mar 7 01:18:39.198783 kernel: pci_bus 619a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 7 01:18:39.202056 kernel: pci_bus 619a:00: No busn resource found for root bus, will use [bus 00-ff] Mar 7 01:18:39.208511 kernel: pci 619a:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 7 01:18:39.213161 kernel: pci 619a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:18:39.218118 kernel: pci 619a:00:02.0: enabling Extended Tags Mar 7 01:18:39.231188 kernel: pci 619a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 619a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 7 01:18:39.237468 kernel: pci_bus 619a:00: busn_res: [bus 00-ff] end is updated to 00
Mar 7 01:18:39.237805 kernel: pci 619a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:18:39.414117 kernel: mlx5_core 619a:00:02.0: enabling device (0000 -> 0002) Mar 7 01:18:39.419121 kernel: mlx5_core 619a:00:02.0: firmware version: 14.30.5026 Mar 7 01:18:39.631125 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (443) Mar 7 01:18:39.646703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 01:18:39.678864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 7 01:18:39.694854 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: VF registering: eth1 Mar 7 01:18:39.695217 kernel: mlx5_core 619a:00:02.0 eth1: joined to eth0 Mar 7 01:18:39.701115 kernel: mlx5_core 619a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 7 01:18:39.714142 kernel: mlx5_core 619a:00:02.0 enP24986s1: renamed from eth1 Mar 7 01:18:39.718490 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 7 01:18:39.817131 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (466) Mar 7 01:18:39.831827 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 7 01:18:39.838098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 7 01:18:39.857327 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:18:39.873119 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:39.882142 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:40.890123 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:40.890753 disk-uuid[609]: The operation has completed successfully. Mar 7 01:18:40.978350 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:18:40.978480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:18:41.010256 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:18:41.019670 sh[695]: Success Mar 7 01:18:41.075271 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 7 01:18:41.505434 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:18:41.512392 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:18:41.521348 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:18:41.546113 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:18:41.546192 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:41.552762 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:18:41.556252 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:18:41.559235 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:18:42.043631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:18:42.049555 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:18:42.060269 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:18:42.068300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 7 01:18:42.090201 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:42.090288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:42.092921 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:18:42.157153 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:18:42.173306 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:42.172827 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:18:42.179560 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:18:42.197336 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:18:42.215617 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:18:42.222829 systemd-networkd[876]: lo: Link UP Mar 7 01:18:42.222839 systemd-networkd[876]: lo: Gained carrier Mar 7 01:18:42.225617 systemd-networkd[876]: Enumeration completed Mar 7 01:18:42.226677 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:18:42.226682 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:18:42.234256 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:18:42.248194 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:18:42.251773 systemd[1]: Reached target network.target - Network. 
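The enumeration above shows eth0 matching /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all DHCP policy for otherwise unconfigured interfaces. A hedged sketch of what such a catch-all unit looks like (the shipped zz-default.network differs in details, e.g. it excludes some virtual interface types):

```ini
# illustrative catch-all systemd-networkd unit, not the exact shipped file
[Match]
Name=*

[Network]
DHCP=yes
```

Because networkd applies .network files in lexical order and stops at the first match, the zz- prefix ensures this default only applies when no more specific unit claimed the interface first.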
Mar 7 01:18:42.322124 kernel: mlx5_core 619a:00:02.0 enP24986s1: Link up Mar 7 01:18:42.363176 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: Data path switched to VF: enP24986s1 Mar 7 01:18:42.363404 systemd-networkd[876]: enP24986s1: Link UP Mar 7 01:18:42.363535 systemd-networkd[876]: eth0: Link UP Mar 7 01:18:42.363661 systemd-networkd[876]: eth0: Gained carrier Mar 7 01:18:42.363672 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:18:42.379369 systemd-networkd[876]: enP24986s1: Gained carrier Mar 7 01:18:42.418160 systemd-networkd[876]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 7 01:18:43.704468 ignition[880]: Ignition 2.19.0 Mar 7 01:18:43.704482 ignition[880]: Stage: fetch-offline Mar 7 01:18:43.704534 ignition[880]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.704545 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.704736 ignition[880]: parsed url from cmdline: "" Mar 7 01:18:43.704743 ignition[880]: no config URL provided Mar 7 01:18:43.704751 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:18:43.704764 ignition[880]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:18:43.704772 ignition[880]: failed to fetch config: resource requires networking Mar 7 01:18:43.719799 ignition[880]: Ignition finished successfully Mar 7 01:18:43.742247 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:18:43.752404 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
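The DHCPv4 lease logged above ("10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16") can be sanity-checked with the standard library; a small sketch:

```python
import ipaddress

# Values taken from the lease in the log above.
iface = ipaddress.ip_interface("10.200.8.18/24")
network = iface.network                      # the /24 the address lives in
gateway = ipaddress.ip_address("10.200.8.1")

# The gateway must be on-link for the default route to be installable.
gateway_on_link = gateway in network
```

The lease server 168.63.129.16 is Azure's wireserver address, which also answers the metadata fetches seen later in this log.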
Mar 7 01:18:43.769107 ignition[887]: Ignition 2.19.0 Mar 7 01:18:43.770622 ignition[887]: Stage: fetch Mar 7 01:18:43.770823 ignition[887]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.770835 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.770946 ignition[887]: parsed url from cmdline: "" Mar 7 01:18:43.770949 ignition[887]: no config URL provided Mar 7 01:18:43.770954 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:18:43.770962 ignition[887]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:18:43.770986 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 01:18:43.860723 ignition[887]: GET result: OK Mar 7 01:18:43.860817 ignition[887]: config has been read from IMDS userdata Mar 7 01:18:43.860847 ignition[887]: parsing config with SHA512: a28bee088d4d767b0085a2dd7c99c8bec51f75c3c7a8edea46c7f9a4acfd435409c9082cb04141d416b3c22df9edc2612d361c81f8c6d866abf4b777cd2b725e Mar 7 01:18:43.870426 unknown[887]: fetched base config from "system" Mar 7 01:18:43.870786 ignition[887]: fetch: fetch complete Mar 7 01:18:43.870442 unknown[887]: fetched base config from "system" Mar 7 01:18:43.870791 ignition[887]: fetch: fetch passed Mar 7 01:18:43.870450 unknown[887]: fetched user config from "azure" Mar 7 01:18:43.870832 ignition[887]: Ignition finished successfully Mar 7 01:18:43.873926 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:18:43.885341 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
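The fetch stage above reads the Ignition config from Azure IMDS userdata via the GET shown in the log. A hedged sketch of the same request (URL copied verbatim from the log; IMDS requires the "Metadata: true" header, and actually fetching it only works from inside an Azure VM):

```python
import urllib.request

# URL as logged by ignition[887] above.
IMDS_USERDATA_URL = (
    "http://169.254.169.254/metadata/instance/compute/userData"
    "?api-version=2021-01-01&format=text"
)

def build_imds_request(url: str = IMDS_USERDATA_URL) -> urllib.request.Request:
    # Azure IMDS rejects requests without the Metadata header.
    return urllib.request.Request(url, headers={"Metadata": "true"})

# Inside an Azure VM one would then do:
# userdata = urllib.request.urlopen(build_imds_request(), timeout=5).read()
```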
Mar 7 01:18:43.907651 ignition[894]: Ignition 2.19.0 Mar 7 01:18:43.907665 ignition[894]: Stage: kargs Mar 7 01:18:43.907911 ignition[894]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.907924 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.908879 ignition[894]: kargs: kargs passed Mar 7 01:18:43.914655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:18:43.908931 ignition[894]: Ignition finished successfully Mar 7 01:18:43.928303 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:18:43.945896 ignition[900]: Ignition 2.19.0 Mar 7 01:18:43.945910 ignition[900]: Stage: disks Mar 7 01:18:43.948436 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:18:43.946162 ignition[900]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.953395 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:18:43.946178 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.961528 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:18:43.947139 ignition[900]: disks: disks passed Mar 7 01:18:43.947192 ignition[900]: Ignition finished successfully Mar 7 01:18:43.978821 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:18:43.979985 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:18:43.980547 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:18:44.002408 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:18:44.072350 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 7 01:18:44.077646 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:18:44.092644 systemd[1]: Mounting sysroot.mount - /sysroot... 
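Every entry in this excerpt shares a fixed prefix: timestamp, source, optional PID in brackets, then the message. A hedged sketch of splitting such lines apart for analysis:

```python
import re

# Matches lines like "Mar 7 01:18:43.907651 ignition[894]: Ignition 2.19.0"
# and kernel lines without a PID, e.g. "Mar 7 01:18:44.190121 kernel: ...".
LINE_RE = re.compile(
    r"^(?P<month>\w{3}) +(?P<day>\d+) (?P<time>[\d:.]+) "
    r"(?P<source>[\w.-]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$"
)

def parse_entry(line):
    """Return a dict of fields for one log entry, or None if it doesn't match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    fields = m.groupdict()
    fields["pid"] = int(fields["pid"]) if fields["pid"] else None
    return fields
```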
Mar 7 01:18:44.190121 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:18:44.191206 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:18:44.194451 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:18:44.232309 systemd-networkd[876]: eth0: Gained IPv6LL Mar 7 01:18:44.241213 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:18:44.262574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (919) Mar 7 01:18:44.262650 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:44.264110 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:44.269485 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:18:44.277121 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:18:44.297207 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:18:44.303811 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 01:18:44.311946 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:18:44.311993 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:18:44.327142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:18:44.332642 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:18:44.344272 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:18:45.361314 coreos-metadata[936]: Mar 07 01:18:45.361 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:18:45.368116 coreos-metadata[936]: Mar 07 01:18:45.368 INFO Fetch successful Mar 7 01:18:45.368116 coreos-metadata[936]: Mar 07 01:18:45.368 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:18:45.378359 coreos-metadata[936]: Mar 07 01:18:45.378 INFO Fetch successful Mar 7 01:18:45.384197 coreos-metadata[936]: Mar 07 01:18:45.384 INFO wrote hostname ci-4081.3.6-n-8271a56a8b to /sysroot/etc/hostname Mar 7 01:18:45.391270 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:18:45.453251 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:18:45.493313 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:18:45.525872 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:18:45.534042 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:18:46.583052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:18:46.597197 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:18:46.606277 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:18:46.616762 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:46.610208 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 7 01:18:46.651862 ignition[1036]: INFO : Ignition 2.19.0 Mar 7 01:18:46.651862 ignition[1036]: INFO : Stage: mount Mar 7 01:18:46.657801 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:46.657801 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:46.657801 ignition[1036]: INFO : mount: mount passed Mar 7 01:18:46.657801 ignition[1036]: INFO : Ignition finished successfully Mar 7 01:18:46.659055 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:18:46.678360 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:18:46.686881 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:18:46.700322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:18:46.722120 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1048) Mar 7 01:18:46.729663 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:46.729748 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:46.732624 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:18:46.740115 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:18:46.742158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:18:46.774107 ignition[1065]: INFO : Ignition 2.19.0 Mar 7 01:18:46.774107 ignition[1065]: INFO : Stage: files Mar 7 01:18:46.782237 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:46.782237 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:46.782237 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:18:46.782237 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:18:46.782237 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:18:46.992603 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:18:46.997053 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:18:46.997053 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:18:46.993084 unknown[1065]: wrote ssh authorized keys file for user: core Mar 7 01:18:47.022073 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:18:47.029176 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:18:47.072972 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:18:47.124871 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:18:47.124871 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 7 01:18:47.738886 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:18:49.782740 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 01:18:49.782740 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: files passed Mar 7 01:18:49.798251 ignition[1065]: INFO : Ignition finished successfully Mar 7 01:18:49.789215 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:18:49.814412 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:18:49.846824 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
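The files stage above executes directives from the Ignition config fetched earlier from IMDS userdata. A minimal, hypothetical config in the same shape (Ignition 2.19.0 implements the spec-3 format; this is an illustrative sketch, not the config this machine actually consumed):

```json
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/home/core/nginx.yaml",
        "mode": 420,
        "contents": { "source": "data:,placeholder%20manifest" }
      }
    ]
  }
}
```

Each `storage.files` entry in the config corresponds to one `op(n): writing file` pair in the log, with remote `source` URLs producing the additional `GET ... attempt #1` / `GET result: OK` lines.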
Mar 7 01:18:49.860354 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:18:49.869661 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:18:49.869661 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:18:49.860523 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:18:49.892597 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:18:49.871405 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:18:49.878401 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:18:49.904386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:18:49.935852 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:18:49.935984 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:18:49.946252 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:18:49.947434 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:18:49.948040 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:18:49.963409 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:18:49.977932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:18:49.995379 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:18:50.011434 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:18:50.018694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:18:50.025843 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:18:50.028891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:18:50.029068 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:18:50.039084 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:18:50.048038 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:18:50.051272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:18:50.056907 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:18:50.068204 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:18:50.069655 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:18:50.070224 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:18:50.070801 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:18:50.071363 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:18:50.071876 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:18:50.072384 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:18:50.072556 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:18:50.074033 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:18:50.075198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:18:50.075674 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:18:50.099218 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:18:50.108760 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:18:50.110793 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 7 01:18:50.122705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:18:50.123404 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:18:50.130549 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:18:50.130746 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:18:50.136193 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 01:18:50.198617 ignition[1117]: INFO : Ignition 2.19.0 Mar 7 01:18:50.198617 ignition[1117]: INFO : Stage: umount Mar 7 01:18:50.198617 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:50.198617 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:50.198617 ignition[1117]: INFO : umount: umount passed Mar 7 01:18:50.198617 ignition[1117]: INFO : Ignition finished successfully Mar 7 01:18:50.136385 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:18:50.162548 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:18:50.188443 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:18:50.191325 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:18:50.191541 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:18:50.198860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:18:50.199003 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:18:50.207865 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:18:50.207984 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:18:50.224381 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:18:50.224485 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 7 01:18:50.229698 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:18:50.230797 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:18:50.230929 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:18:50.234353 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:18:50.234414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:18:50.237395 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:18:50.237451 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:18:50.240575 systemd[1]: Stopped target network.target - Network. Mar 7 01:18:50.243197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:18:50.243260 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:18:50.248497 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:18:50.256635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:18:50.259082 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:18:50.260187 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:18:50.260734 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:18:50.261410 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:18:50.261462 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:18:50.261973 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:18:50.262014 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:18:50.263042 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:18:50.263100 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:18:50.263516 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:18:50.263556 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Mar 7 01:18:50.264248 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:18:50.264470 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:18:50.438998 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: Data path switched from VF: enP24986s1 Mar 7 01:18:50.300291 systemd-networkd[876]: eth0: DHCPv6 lease lost Mar 7 01:18:50.304166 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:18:50.304825 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:18:50.309879 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:18:50.309923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:18:50.326926 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:18:50.332050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:18:50.332134 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:18:50.335786 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:18:50.337042 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:18:50.337355 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:18:50.377368 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:18:50.377462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:18:50.380022 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:18:50.380078 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:18:50.399656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:18:50.399729 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:18:50.406580 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 7 01:18:50.406736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:18:50.413082 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:18:50.413427 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:18:50.418584 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:18:50.418630 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:18:50.424476 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:18:50.424541 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:18:50.437825 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:18:50.437888 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:18:50.443178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:18:50.443254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:18:50.466370 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:18:50.469603 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:18:50.469701 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:18:50.477140 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:18:50.477220 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:18:50.555075 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:18:50.555610 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:18:50.563414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:18:50.563484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:18:50.575270 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:18:50.575407 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:18:50.580958 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:18:50.581067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:18:50.692280 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:18:50.692390 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:18:50.694251 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:18:50.694542 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:18:50.694595 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:18:50.706385 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:18:50.797042 systemd[1]: Switching root. Mar 7 01:18:50.840936 systemd-journald[177]: Journal stopped Mar 7 01:18:37.130798 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:18:37.130834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:18:37.130851 kernel: BIOS-provided physical RAM map: Mar 7 01:18:37.130862 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 7 01:18:37.130872 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Mar 7 01:18:37.130882 kernel: BIOS-e820: [mem 
0x0000000000100000-0x000000000437dfff] usable Mar 7 01:18:37.130895 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Mar 7 01:18:37.130906 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Mar 7 01:18:37.130919 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Mar 7 01:18:37.130930 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Mar 7 01:18:37.130962 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Mar 7 01:18:37.130973 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Mar 7 01:18:37.130984 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Mar 7 01:18:37.130996 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Mar 7 01:18:37.131013 kernel: printk: bootconsole [earlyser0] enabled Mar 7 01:18:37.131025 kernel: NX (Execute Disable) protection: active Mar 7 01:18:37.131037 kernel: APIC: Static calls initialized Mar 7 01:18:37.131049 kernel: efi: EFI v2.7 by Microsoft Mar 7 01:18:37.131062 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f421418 Mar 7 01:18:37.131074 kernel: SMBIOS 3.1.0 present. 
Mar 7 01:18:37.131087 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Mar 7 01:18:37.131099 kernel: Hypervisor detected: Microsoft Hyper-V Mar 7 01:18:37.131111 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Mar 7 01:18:37.131123 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0 Mar 7 01:18:37.131135 kernel: Hyper-V: Nested features: 0x1e0101 Mar 7 01:18:37.131150 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Mar 7 01:18:37.131162 kernel: Hyper-V: Using hypercall for remote TLB flush Mar 7 01:18:37.131175 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 7 01:18:37.131187 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 7 01:18:37.131200 kernel: tsc: Marking TSC unstable due to running on Hyper-V Mar 7 01:18:37.131213 kernel: tsc: Detected 2593.907 MHz processor Mar 7 01:18:37.131226 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:18:37.131239 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:18:37.131252 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Mar 7 01:18:37.131267 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 7 01:18:37.131281 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:18:37.131292 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Mar 7 01:18:37.131303 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Mar 7 01:18:37.131316 kernel: Using GB pages for direct mapping Mar 7 01:18:37.131331 kernel: Secure boot disabled Mar 7 01:18:37.131351 kernel: ACPI: Early table checksum verification disabled Mar 7 01:18:37.131370 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 7 01:18:37.131385 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 
VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131400 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131415 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 7 01:18:37.131431 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 7 01:18:37.131446 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131461 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131480 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131495 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131510 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131525 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:18:37.131540 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 7 01:18:37.131553 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Mar 7 01:18:37.131566 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 7 01:18:37.131580 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 7 01:18:37.131593 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Mar 7 01:18:37.131609 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 7 01:18:37.131623 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Mar 7 01:18:37.131636 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Mar 7 01:18:37.131649 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 7 01:18:37.131661 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 7 01:18:37.131674 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 7 
01:18:37.131687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 7 01:18:37.131700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 7 01:18:37.131713 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 7 01:18:37.131729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 7 01:18:37.131742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 7 01:18:37.131756 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 7 01:18:37.131769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 7 01:18:37.131782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 7 01:18:37.131796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 7 01:18:37.131810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 7 01:18:37.131823 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Mar 7 01:18:37.131839 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 7 01:18:37.131853 kernel: Zone ranges: Mar 7 01:18:37.131867 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:18:37.131880 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 7 01:18:37.131894 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 7 01:18:37.131907 kernel: Movable zone start for each node Mar 7 01:18:37.131921 kernel: Early memory node ranges Mar 7 01:18:37.131934 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 7 01:18:37.131959 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Mar 7 01:18:37.131973 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Mar 7 01:18:37.131985 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 7 01:18:37.131996 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 7 
01:18:37.132008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 7 01:18:37.132020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:18:37.132031 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 7 01:18:37.132043 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Mar 7 01:18:37.132056 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Mar 7 01:18:37.132069 kernel: ACPI: PM-Timer IO Port: 0x408 Mar 7 01:18:37.132085 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 7 01:18:37.132099 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:18:37.132113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:18:37.132127 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:18:37.132140 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 7 01:18:37.132154 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:18:37.132168 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 7 01:18:37.132182 kernel: Booting paravirtualized kernel on Hyper-V Mar 7 01:18:37.132196 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:18:37.132212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:18:37.132226 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:18:37.132240 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:18:37.132253 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:18:37.132266 kernel: Hyper-V: PV spinlocks enabled Mar 7 01:18:37.132279 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:18:37.132295 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 
root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:18:37.132309 kernel: random: crng init done Mar 7 01:18:37.132326 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 7 01:18:37.132340 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:18:37.132353 kernel: Fallback order for Node 0: 0 Mar 7 01:18:37.132366 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Mar 7 01:18:37.132378 kernel: Policy zone: Normal Mar 7 01:18:37.132392 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:18:37.132406 kernel: software IO TLB: area num 2. Mar 7 01:18:37.132419 kernel: Memory: 8066052K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316916K reserved, 0K cma-reserved) Mar 7 01:18:37.132434 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:18:37.132461 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:18:37.132475 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:18:37.132489 kernel: Dynamic Preempt: voluntary Mar 7 01:18:37.132506 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:18:37.132520 kernel: rcu: RCU event tracing is enabled. Mar 7 01:18:37.132535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:18:37.132550 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:18:37.132564 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:18:37.132579 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:18:37.132597 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 7 01:18:37.132611 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:18:37.132625 kernel: Using NULL legacy PIC Mar 7 01:18:37.132639 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 7 01:18:37.132652 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:18:37.132665 kernel: Console: colour dummy device 80x25 Mar 7 01:18:37.132679 kernel: printk: console [tty1] enabled Mar 7 01:18:37.132693 kernel: printk: console [ttyS0] enabled Mar 7 01:18:37.132712 kernel: printk: bootconsole [earlyser0] disabled Mar 7 01:18:37.132726 kernel: ACPI: Core revision 20230628 Mar 7 01:18:37.132741 kernel: Failed to register legacy timer interrupt Mar 7 01:18:37.132754 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:18:37.132765 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 7 01:18:37.132777 kernel: Hyper-V: Using IPI hypercalls Mar 7 01:18:37.132790 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 7 01:18:37.132804 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 7 01:18:37.132817 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 7 01:18:37.132830 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 7 01:18:37.132839 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 7 01:18:37.132854 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Mar 7 01:18:37.132867 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593907) Mar 7 01:18:37.132882 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 7 01:18:37.132896 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 7 01:18:37.132910 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:18:37.132925 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:18:37.132956 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:18:37.132971 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Mar 7 01:18:37.132992 kernel: RETBleed: Vulnerable Mar 7 01:18:37.133007 kernel: Speculative Store Bypass: Vulnerable Mar 7 01:18:37.133023 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:18:37.133038 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:18:37.133054 kernel: active return thunk: its_return_thunk Mar 7 01:18:37.133069 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 7 01:18:37.133084 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:18:37.133099 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:18:37.133114 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:18:37.133130 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 7 01:18:37.133150 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 7 01:18:37.133166 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 7 01:18:37.133181 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:18:37.133197 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 7 01:18:37.133212 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 7 01:18:37.133228 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Mar 7 01:18:37.133244 kernel: x86/fpu: Enabled xstate 
features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 7 01:18:37.133258 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:18:37.133271 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:18:37.133286 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:18:37.133299 kernel: landlock: Up and running. Mar 7 01:18:37.133312 kernel: SELinux: Initializing. Mar 7 01:18:37.133329 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 7 01:18:37.133343 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 7 01:18:37.133353 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 7 01:18:37.133361 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:18:37.133369 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:18:37.133378 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:18:37.133386 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 7 01:18:37.133394 kernel: signal: max sigframe size: 3632 Mar 7 01:18:37.133402 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:18:37.133414 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:18:37.133422 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 01:18:37.133430 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:18:37.133438 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:18:37.133446 kernel: .... node #0, CPUs: #1 Mar 7 01:18:37.133454 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Mar 7 01:18:37.133463 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 7 01:18:37.133471 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:18:37.133479 kernel: smpboot: Max logical packages: 1 Mar 7 01:18:37.133490 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Mar 7 01:18:37.133498 kernel: devtmpfs: initialized Mar 7 01:18:37.133507 kernel: x86/mm: Memory block size: 128MB Mar 7 01:18:37.133515 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 7 01:18:37.133523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:18:37.133531 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:18:37.133539 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:18:37.133547 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:18:37.133555 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:18:37.133565 kernel: audit: type=2000 audit(1772846316.031:1): state=initialized audit_enabled=0 res=1 Mar 7 01:18:37.133574 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:18:37.133581 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 01:18:37.133590 kernel: cpuidle: using governor menu Mar 7 01:18:37.133598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:18:37.133606 kernel: dca service started, version 1.12.1 Mar 7 01:18:37.133614 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Mar 7 01:18:37.133622 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Mar 7 01:18:37.133629 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 7 01:18:37.133640 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:18:37.133648 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:18:37.133656 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:18:37.133664 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:18:37.133672 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:18:37.133680 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:18:37.133688 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:18:37.133696 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:18:37.133706 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:18:37.133715 kernel: ACPI: Interpreter enabled Mar 7 01:18:37.133722 kernel: ACPI: PM: (supports S0 S5) Mar 7 01:18:37.133731 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:18:37.133739 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:18:37.133747 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 7 01:18:37.133755 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Mar 7 01:18:37.133763 kernel: iommu: Default domain type: Translated Mar 7 01:18:37.133771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 01:18:37.133779 kernel: efivars: Registered efivars operations Mar 7 01:18:37.133789 kernel: PCI: Using ACPI for IRQ routing Mar 7 01:18:37.133797 kernel: PCI: System does not support PCI Mar 7 01:18:37.133805 kernel: vgaarb: loaded Mar 7 01:18:37.133813 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Mar 7 01:18:37.133821 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 01:18:37.133835 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:18:37.133845 kernel: pnp: PnP ACPI init Mar 7 01:18:37.133853 kernel: pnp: PnP ACPI: found 3 devices Mar 7 01:18:37.133861 kernel: clocksource: acpi_pm: mask: 
0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 01:18:37.133872 kernel: NET: Registered PF_INET protocol family Mar 7 01:18:37.133880 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 7 01:18:37.133889 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 7 01:18:37.133897 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:18:37.133906 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 01:18:37.133914 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 7 01:18:37.133925 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 7 01:18:37.133934 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 7 01:18:37.133961 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 7 01:18:37.133975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:18:37.133983 kernel: NET: Registered PF_XDP protocol family Mar 7 01:18:37.133992 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:18:37.134000 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 7 01:18:37.134008 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Mar 7 01:18:37.134016 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 7 01:18:37.134024 kernel: Initialise system trusted keyrings Mar 7 01:18:37.134032 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 7 01:18:37.134043 kernel: Key type asymmetric registered Mar 7 01:18:37.134051 kernel: Asymmetric key parser 'x509' registered Mar 7 01:18:37.134058 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:18:37.134067 kernel: io scheduler mq-deadline registered Mar 7 01:18:37.134075 kernel: io scheduler kyber registered Mar 7 01:18:37.134083 kernel: io scheduler bfq 
registered Mar 7 01:18:37.134091 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:18:37.134099 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:18:37.134107 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 01:18:37.134115 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 7 01:18:37.134125 kernel: i8042: PNP: No PS/2 controller found. Mar 7 01:18:37.134269 kernel: rtc_cmos 00:02: registered as rtc0 Mar 7 01:18:37.134353 kernel: rtc_cmos 00:02: setting system clock to 2026-03-07T01:18:36 UTC (1772846316) Mar 7 01:18:37.134429 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Mar 7 01:18:37.134439 kernel: intel_pstate: CPU model not supported Mar 7 01:18:37.134448 kernel: efifb: probing for efifb Mar 7 01:18:37.134456 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 7 01:18:37.134467 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 7 01:18:37.134475 kernel: efifb: scrolling: redraw Mar 7 01:18:37.134483 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 7 01:18:37.134491 kernel: Console: switching to colour frame buffer device 128x48 Mar 7 01:18:37.134499 kernel: fb0: EFI VGA frame buffer device Mar 7 01:18:37.134507 kernel: pstore: Using crash dump compression: deflate Mar 7 01:18:37.134515 kernel: pstore: Registered efi_pstore as persistent store backend Mar 7 01:18:37.134523 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:18:37.134532 kernel: Segment Routing with IPv6 Mar 7 01:18:37.134542 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:18:37.134550 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:18:37.134558 kernel: Key type dns_resolver registered Mar 7 01:18:37.134566 kernel: IPI shorthand broadcast: enabled Mar 7 01:18:37.134575 kernel: sched_clock: Marking stable (932003300, 55349000)->(1228093300, -240741000) Mar 7 01:18:37.134583 kernel: registered taskstats version 1 Mar 7 
01:18:37.134591 kernel: Loading compiled-in X.509 certificates Mar 7 01:18:37.134599 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:18:37.134607 kernel: Key type .fscrypt registered Mar 7 01:18:37.134617 kernel: Key type fscrypt-provisioning registered Mar 7 01:18:37.134625 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 01:18:37.134633 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:18:37.134641 kernel: ima: No architecture policies found Mar 7 01:18:37.134649 kernel: clk: Disabling unused clocks Mar 7 01:18:37.134657 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:18:37.134665 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:18:37.134673 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:18:37.134681 kernel: Run /init as init process Mar 7 01:18:37.134692 kernel: with arguments: Mar 7 01:18:37.134699 kernel: /init Mar 7 01:18:37.134708 kernel: with environment: Mar 7 01:18:37.134715 kernel: HOME=/ Mar 7 01:18:37.134723 kernel: TERM=linux Mar 7 01:18:37.134734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:18:37.134744 systemd[1]: Detected virtualization microsoft. Mar 7 01:18:37.134753 systemd[1]: Detected architecture x86-64. Mar 7 01:18:37.134763 systemd[1]: Running in initrd. Mar 7 01:18:37.134772 systemd[1]: No hostname configured, using default hostname. Mar 7 01:18:37.134780 systemd[1]: Hostname set to . Mar 7 01:18:37.134789 systemd[1]: Initializing machine ID from random generator. Mar 7 01:18:37.134797 systemd[1]: Queued start job for default target initrd.target. 
Mar 7 01:18:37.134806 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:18:37.134814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:18:37.134823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:18:37.134834 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:18:37.134843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:18:37.134851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:18:37.134861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:18:37.134870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:18:37.134879 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:18:37.134887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:18:37.134898 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:18:37.134906 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:18:37.134915 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:18:37.134923 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:18:37.134932 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:18:37.140095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:18:37.140122 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:18:37.140137 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Mar 7 01:18:37.140152 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:18:37.140173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:18:37.140188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:18:37.140205 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:18:37.140220 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:18:37.140237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:18:37.140251 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:18:37.140264 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:18:37.140279 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:18:37.140298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:18:37.140346 systemd-journald[177]: Collecting audit messages is disabled. Mar 7 01:18:37.140379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:18:37.140394 systemd-journald[177]: Journal started Mar 7 01:18:37.140430 systemd-journald[177]: Runtime Journal (/run/log/journal/962d85a35e804333be87ab8b195f8e34) is 8.0M, max 158.7M, 150.7M free. Mar 7 01:18:37.152129 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:18:37.152841 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:18:37.160614 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:18:37.166082 systemd-modules-load[178]: Inserted module 'overlay' Mar 7 01:18:37.166294 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:18:37.173029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:18:37.194312 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 7 01:18:37.204148 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:18:37.223337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:18:37.228642 systemd-modules-load[178]: Inserted module 'br_netfilter' Mar 7 01:18:37.233131 kernel: Bridge firewalling registered Mar 7 01:18:37.232119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:18:37.236088 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:18:37.244792 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:18:37.250276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:18:37.263857 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:18:37.275225 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:18:37.279095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:18:37.280463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:18:37.306731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:18:37.309097 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:18:37.318108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 7 01:18:37.331708 dracut-cmdline[205]: dracut-dracut-053 Mar 7 01:18:37.337188 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:18:37.381218 systemd-resolved[219]: Positive Trust Anchors: Mar 7 01:18:37.381236 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:18:37.381296 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:18:37.412589 systemd-resolved[219]: Defaulting to hostname 'linux'. Mar 7 01:18:37.413869 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:18:37.417905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:18:37.436961 kernel: SCSI subsystem initialized Mar 7 01:18:37.447963 kernel: Loading iSCSI transport class v2.0-870. 
Mar 7 01:18:37.459969 kernel: iscsi: registered transport (tcp) Mar 7 01:18:37.481032 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:18:37.481144 kernel: QLogic iSCSI HBA Driver Mar 7 01:18:37.518599 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:18:37.527256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:18:37.556143 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:18:37.556318 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:18:37.560961 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:18:37.600974 kernel: raid6: avx512x4 gen() 18008 MB/s Mar 7 01:18:37.620959 kernel: raid6: avx512x2 gen() 18080 MB/s Mar 7 01:18:37.639951 kernel: raid6: avx512x1 gen() 18165 MB/s Mar 7 01:18:37.658956 kernel: raid6: avx2x4 gen() 18134 MB/s Mar 7 01:18:37.678959 kernel: raid6: avx2x2 gen() 18109 MB/s Mar 7 01:18:37.699431 kernel: raid6: avx2x1 gen() 13756 MB/s Mar 7 01:18:37.699464 kernel: raid6: using algorithm avx512x1 gen() 18165 MB/s Mar 7 01:18:37.721429 kernel: raid6: .... xor() 25744 MB/s, rmw enabled Mar 7 01:18:37.721463 kernel: raid6: using avx512x2 recovery algorithm Mar 7 01:18:37.744976 kernel: xor: automatically using best checksumming function avx Mar 7 01:18:37.892977 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:18:37.903425 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:18:37.915138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:18:37.929849 systemd-udevd[398]: Using default interface naming scheme 'v255'. Mar 7 01:18:37.934523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:18:37.948128 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 7 01:18:37.961756 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Mar 7 01:18:37.992290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:18:38.003235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:18:38.046257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:18:38.057166 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:18:38.079539 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:18:38.090342 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:18:38.098656 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:18:38.106911 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:18:38.120330 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:18:38.148536 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:18:38.156987 kernel: hv_vmbus: Vmbus version:5.2 Mar 7 01:18:38.157559 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:18:38.174966 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 7 01:18:38.194418 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 7 01:18:38.194493 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 7 01:18:38.194513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 7 01:18:38.201961 kernel: hv_vmbus: registering driver hv_storvsc Mar 7 01:18:38.207907 kernel: scsi host1: storvsc_host_t Mar 7 01:18:38.207998 kernel: scsi host0: storvsc_host_t Mar 7 01:18:38.212641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 7 01:18:38.215645 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 7 01:18:38.215570 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:18:38.224849 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 7 01:18:38.226986 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:18:38.230699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:18:38.235001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:18:38.236581 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:18:38.258183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:18:38.268905 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:18:38.278192 kernel: PTP clock support registered Mar 7 01:18:38.274600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:18:38.287156 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:18:38.287226 kernel: AES CTR mode by8 optimization enabled Mar 7 01:18:38.289250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:18:38.305960 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:18:38.319964 kernel: hv_vmbus: registering driver hid_hyperv Mar 7 01:18:38.320019 kernel: hv_vmbus: registering driver hv_netvsc Mar 7 01:18:38.324195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:18:38.343034 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 7 01:18:38.343061 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 7 01:18:38.343238 kernel: hv_utils: Registering HyperV Utility Driver Mar 7 01:18:38.343251 kernel: hv_vmbus: registering driver hv_utils Mar 7 01:18:38.348154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:18:38.367831 kernel: hv_utils: Heartbeat IC version 3.0 Mar 7 01:18:38.367888 kernel: hv_utils: Shutdown IC version 3.2 Mar 7 01:18:38.370958 kernel: hv_utils: TimeSync IC version 4.0 Mar 7 01:18:38.873778 systemd-resolved[219]: Clock change detected. Flushing caches. Mar 7 01:18:38.885013 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 7 01:18:38.885317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:18:38.893711 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 7 01:18:38.897391 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 01:18:38.927116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:18:38.933448 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 7 01:18:38.933814 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 7 01:18:38.936110 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:18:38.940335 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 7 01:18:38.940641 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 7 01:18:38.954083 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:38.954167 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:18:38.970121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#301 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:18:39.031340 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: VF slot 1 added Mar 7 01:18:39.038117 kernel: hv_vmbus: registering driver hv_pci Mar 7 01:18:39.190830 kernel: hv_pci 375e7131-619a-4e43-bab3-e106caeea546: PCI VMBus probing: Using version 0x10004 Mar 7 01:18:39.198466 kernel: hv_pci 375e7131-619a-4e43-bab3-e106caeea546: PCI host bridge to bus 619a:00 Mar 7 01:18:39.198783 kernel: pci_bus 619a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 7 01:18:39.202056 kernel: pci_bus 619a:00: No busn resource found for root bus, will use [bus 00-ff] Mar 7 01:18:39.208511 kernel: pci 619a:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 7 01:18:39.213161 kernel: pci 619a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:18:39.218118 kernel: pci 619a:00:02.0: enabling Extended Tags Mar 7 01:18:39.231188 kernel: pci 619a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 619a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 7 01:18:39.237468 kernel: pci_bus 619a:00: busn_res: [bus 00-ff] end is updated to 00 Mar 7 01:18:39.237805 kernel: pci 
619a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:18:39.414117 kernel: mlx5_core 619a:00:02.0: enabling device (0000 -> 0002) Mar 7 01:18:39.419121 kernel: mlx5_core 619a:00:02.0: firmware version: 14.30.5026 Mar 7 01:18:39.631125 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (443) Mar 7 01:18:39.646703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 01:18:39.678864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 7 01:18:39.694854 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: VF registering: eth1 Mar 7 01:18:39.695217 kernel: mlx5_core 619a:00:02.0 eth1: joined to eth0 Mar 7 01:18:39.701115 kernel: mlx5_core 619a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 7 01:18:39.714142 kernel: mlx5_core 619a:00:02.0 enP24986s1: renamed from eth1 Mar 7 01:18:39.718490 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 7 01:18:39.817131 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (466) Mar 7 01:18:39.831827 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 7 01:18:39.838098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 7 01:18:39.857327 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:18:39.873119 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:39.882142 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:40.890123 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:18:40.890753 disk-uuid[609]: The operation has completed successfully. Mar 7 01:18:40.978350 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 7 01:18:40.978480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:18:41.010256 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:18:41.019670 sh[695]: Success Mar 7 01:18:41.075271 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 7 01:18:41.505434 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:18:41.512392 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:18:41.521348 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:18:41.546113 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:18:41.546192 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:41.552762 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:18:41.556252 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:18:41.559235 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:18:42.043631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:18:42.049555 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:18:42.060269 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:18:42.068300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 7 01:18:42.090201 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:42.090288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:42.092921 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:18:42.157153 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:18:42.173306 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:42.172827 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:18:42.179560 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:18:42.197336 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:18:42.215617 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:18:42.222829 systemd-networkd[876]: lo: Link UP Mar 7 01:18:42.222839 systemd-networkd[876]: lo: Gained carrier Mar 7 01:18:42.225617 systemd-networkd[876]: Enumeration completed Mar 7 01:18:42.226677 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:18:42.226682 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:18:42.234256 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:18:42.248194 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:18:42.251773 systemd[1]: Reached target network.target - Network. 
Mar 7 01:18:42.322124 kernel: mlx5_core 619a:00:02.0 enP24986s1: Link up Mar 7 01:18:42.363176 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: Data path switched to VF: enP24986s1 Mar 7 01:18:42.363404 systemd-networkd[876]: enP24986s1: Link UP Mar 7 01:18:42.363535 systemd-networkd[876]: eth0: Link UP Mar 7 01:18:42.363661 systemd-networkd[876]: eth0: Gained carrier Mar 7 01:18:42.363672 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:18:42.379369 systemd-networkd[876]: enP24986s1: Gained carrier Mar 7 01:18:42.418160 systemd-networkd[876]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 7 01:18:43.704468 ignition[880]: Ignition 2.19.0 Mar 7 01:18:43.704482 ignition[880]: Stage: fetch-offline Mar 7 01:18:43.704534 ignition[880]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.704545 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.704736 ignition[880]: parsed url from cmdline: "" Mar 7 01:18:43.704743 ignition[880]: no config URL provided Mar 7 01:18:43.704751 ignition[880]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:18:43.704764 ignition[880]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:18:43.704772 ignition[880]: failed to fetch config: resource requires networking Mar 7 01:18:43.719799 ignition[880]: Ignition finished successfully Mar 7 01:18:43.742247 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:18:43.752404 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 01:18:43.769107 ignition[887]: Ignition 2.19.0 Mar 7 01:18:43.770622 ignition[887]: Stage: fetch Mar 7 01:18:43.770823 ignition[887]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.770835 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.770946 ignition[887]: parsed url from cmdline: "" Mar 7 01:18:43.770949 ignition[887]: no config URL provided Mar 7 01:18:43.770954 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:18:43.770962 ignition[887]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:18:43.770986 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 01:18:43.860723 ignition[887]: GET result: OK Mar 7 01:18:43.860817 ignition[887]: config has been read from IMDS userdata Mar 7 01:18:43.860847 ignition[887]: parsing config with SHA512: a28bee088d4d767b0085a2dd7c99c8bec51f75c3c7a8edea46c7f9a4acfd435409c9082cb04141d416b3c22df9edc2612d361c81f8c6d866abf4b777cd2b725e Mar 7 01:18:43.870426 unknown[887]: fetched base config from "system" Mar 7 01:18:43.870786 ignition[887]: fetch: fetch complete Mar 7 01:18:43.870442 unknown[887]: fetched base config from "system" Mar 7 01:18:43.870791 ignition[887]: fetch: fetch passed Mar 7 01:18:43.870450 unknown[887]: fetched user config from "azure" Mar 7 01:18:43.870832 ignition[887]: Ignition finished successfully Mar 7 01:18:43.873926 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:18:43.885341 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 7 01:18:43.907651 ignition[894]: Ignition 2.19.0 Mar 7 01:18:43.907665 ignition[894]: Stage: kargs Mar 7 01:18:43.907911 ignition[894]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.907924 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.908879 ignition[894]: kargs: kargs passed Mar 7 01:18:43.914655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:18:43.908931 ignition[894]: Ignition finished successfully Mar 7 01:18:43.928303 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:18:43.945896 ignition[900]: Ignition 2.19.0 Mar 7 01:18:43.945910 ignition[900]: Stage: disks Mar 7 01:18:43.948436 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:18:43.946162 ignition[900]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:43.953395 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:18:43.946178 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:43.961528 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:18:43.947139 ignition[900]: disks: disks passed Mar 7 01:18:43.947192 ignition[900]: Ignition finished successfully Mar 7 01:18:43.978821 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:18:43.979985 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:18:43.980547 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:18:44.002408 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:18:44.072350 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 7 01:18:44.077646 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:18:44.092644 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 7 01:18:44.190121 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:18:44.191206 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:18:44.194451 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:18:44.232309 systemd-networkd[876]: eth0: Gained IPv6LL Mar 7 01:18:44.241213 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:18:44.262574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (919) Mar 7 01:18:44.262650 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:44.264110 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:44.269485 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:18:44.277121 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:18:44.297207 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:18:44.303811 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 01:18:44.311946 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:18:44.311993 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:18:44.327142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:18:44.332642 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:18:44.344272 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:18:45.361314 coreos-metadata[936]: Mar 07 01:18:45.361 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:18:45.368116 coreos-metadata[936]: Mar 07 01:18:45.368 INFO Fetch successful Mar 7 01:18:45.368116 coreos-metadata[936]: Mar 07 01:18:45.368 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:18:45.378359 coreos-metadata[936]: Mar 07 01:18:45.378 INFO Fetch successful Mar 7 01:18:45.384197 coreos-metadata[936]: Mar 07 01:18:45.384 INFO wrote hostname ci-4081.3.6-n-8271a56a8b to /sysroot/etc/hostname Mar 7 01:18:45.391270 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:18:45.453251 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:18:45.493313 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:18:45.525872 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:18:45.534042 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:18:46.583052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:18:46.597197 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:18:46.606277 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:18:46.616762 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:46.610208 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 7 01:18:46.651862 ignition[1036]: INFO : Ignition 2.19.0 Mar 7 01:18:46.651862 ignition[1036]: INFO : Stage: mount Mar 7 01:18:46.657801 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:46.657801 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:46.657801 ignition[1036]: INFO : mount: mount passed Mar 7 01:18:46.657801 ignition[1036]: INFO : Ignition finished successfully Mar 7 01:18:46.659055 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:18:46.678360 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:18:46.686881 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:18:46.700322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:18:46.722120 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1048) Mar 7 01:18:46.729663 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:18:46.729748 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:18:46.732624 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:18:46.740115 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:18:46.742158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:18:46.774107 ignition[1065]: INFO : Ignition 2.19.0 Mar 7 01:18:46.774107 ignition[1065]: INFO : Stage: files Mar 7 01:18:46.782237 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:46.782237 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:46.782237 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:18:46.782237 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:18:46.782237 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:18:46.992603 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:18:46.997053 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:18:46.997053 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:18:46.993084 unknown[1065]: wrote ssh authorized keys file for user: core Mar 7 01:18:47.022073 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:18:47.029176 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:18:47.072972 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:18:47.124871 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:18:47.124871 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:18:47.143499 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 7 01:18:47.738886 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:18:49.782740 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 7 01:18:49.782740 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:18:49.798251 ignition[1065]: INFO : files: files passed Mar 7 01:18:49.798251 ignition[1065]: INFO : Ignition finished successfully Mar 7 01:18:49.789215 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:18:49.814412 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:18:49.846824 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 7 01:18:49.860354 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:18:49.869661 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:18:49.869661 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:18:49.860523 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:18:49.892597 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:18:49.871405 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:18:49.878401 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:18:49.904386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:18:49.935852 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:18:49.935984 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:18:49.946252 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:18:49.947434 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:18:49.948040 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:18:49.963409 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:18:49.977932 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:18:49.995379 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:18:50.011434 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:18:50.018694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:18:50.025843 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:18:50.028891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:18:50.029068 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:18:50.039084 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:18:50.048038 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:18:50.051272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:18:50.056907 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:18:50.068204 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:18:50.069655 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:18:50.070224 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:18:50.070801 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:18:50.071363 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:18:50.071876 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:18:50.072384 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:18:50.072556 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:18:50.074033 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:18:50.075198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:18:50.075674 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:18:50.099218 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:18:50.108760 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:18:50.110793 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 7 01:18:50.122705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:18:50.123404 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:18:50.130549 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:18:50.130746 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:18:50.136193 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 01:18:50.198617 ignition[1117]: INFO : Ignition 2.19.0 Mar 7 01:18:50.198617 ignition[1117]: INFO : Stage: umount Mar 7 01:18:50.198617 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:18:50.198617 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:18:50.198617 ignition[1117]: INFO : umount: umount passed Mar 7 01:18:50.198617 ignition[1117]: INFO : Ignition finished successfully Mar 7 01:18:50.136385 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:18:50.162548 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:18:50.188443 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:18:50.191325 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:18:50.191541 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:18:50.198860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:18:50.199003 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:18:50.207865 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:18:50.207984 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:18:50.224381 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:18:50.224485 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 7 01:18:50.229698 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:18:50.230797 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:18:50.230929 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:18:50.234353 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:18:50.234414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:18:50.237395 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:18:50.237451 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:18:50.240575 systemd[1]: Stopped target network.target - Network. Mar 7 01:18:50.243197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:18:50.243260 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:18:50.248497 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:18:50.256635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:18:50.259082 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:18:50.260187 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:18:50.260734 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:18:50.261410 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:18:50.261462 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:18:50.261973 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:18:50.262014 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:18:50.263042 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:18:50.263100 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:18:50.263516 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:18:50.263556 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Mar 7 01:18:50.264248 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:18:50.264470 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:18:50.438998 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: Data path switched from VF: enP24986s1 Mar 7 01:18:50.300291 systemd-networkd[876]: eth0: DHCPv6 lease lost Mar 7 01:18:50.304166 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:18:50.304825 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:18:50.309879 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:18:50.309923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:18:50.326926 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:18:50.332050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:18:50.332134 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:18:50.335786 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:18:50.337042 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:18:50.337355 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:18:50.377368 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:18:50.377462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:18:50.380022 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:18:50.380078 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:18:50.399656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:18:50.399729 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:18:50.406580 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 7 01:18:50.406736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:18:50.413082 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:18:50.413427 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:18:50.418584 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:18:50.418630 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:18:50.424476 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:18:50.424541 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:18:50.437825 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:18:50.437888 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:18:50.443178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:18:50.443254 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:18:50.466370 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:18:50.469603 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:18:50.469701 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:18:50.477140 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:18:50.477220 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:18:50.555075 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:18:50.555610 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:18:50.563414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:18:50.563484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:18:50.575270 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:18:50.575407 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:18:50.580958 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:18:50.581067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:18:50.692280 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:18:50.692390 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:18:50.694251 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:18:50.694542 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:18:50.694595 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:18:50.706385 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:18:50.797042 systemd[1]: Switching root. Mar 7 01:18:50.840936 systemd-journald[177]: Journal stopped Mar 7 01:18:54.287352 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Mar 7 01:18:54.287384 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:18:54.287401 kernel: SELinux: policy capability open_perms=1 Mar 7 01:18:54.287410 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:18:54.287423 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:18:54.287431 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:18:54.287440 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:18:54.287453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:18:54.287464 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:18:54.287477 kernel: audit: type=1403 audit(1772846331.633:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:18:54.287487 systemd[1]: Successfully loaded SELinux policy in 69.330ms. 
Mar 7 01:18:54.287501 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.203ms. Mar 7 01:18:54.287512 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:18:54.287526 systemd[1]: Detected virtualization microsoft. Mar 7 01:18:54.287541 systemd[1]: Detected architecture x86-64. Mar 7 01:18:54.287555 systemd[1]: Detected first boot. Mar 7 01:18:54.287565 systemd[1]: Hostname set to . Mar 7 01:18:54.287579 systemd[1]: Initializing machine ID from random generator. Mar 7 01:18:54.287593 zram_generator::config[1159]: No configuration found. Mar 7 01:18:54.287606 systemd[1]: Populated /etc with preset unit settings. Mar 7 01:18:54.287620 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 01:18:54.287630 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 01:18:54.287644 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 01:18:54.287654 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:18:54.287668 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:18:54.287683 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:18:54.287700 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:18:54.287714 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:18:54.287730 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 01:18:54.287748 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Mar 7 01:18:54.287767 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:18:54.287789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:18:54.287808 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:18:54.287828 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:18:54.287855 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:18:54.287877 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 01:18:54.287895 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:18:54.287915 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 01:18:54.287935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:18:54.287959 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 01:18:54.287987 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 01:18:54.288009 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 01:18:54.288034 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:18:54.288059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:18:54.288080 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:18:54.290133 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:18:54.290165 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:18:54.290184 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 01:18:54.290202 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Mar 7 01:18:54.290219 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:18:54.290241 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:18:54.290263 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:18:54.290281 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:18:54.290299 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:18:54.290316 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 01:18:54.290337 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:18:54.290355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:18:54.290371 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:18:54.290388 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:18:54.290405 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 01:18:54.290423 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:18:54.290439 systemd[1]: Reached target machines.target - Containers. Mar 7 01:18:54.290455 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:18:54.290477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:18:54.290492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:18:54.290509 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:18:54.290525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 7 01:18:54.290544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:18:54.290561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:18:54.290579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:18:54.290597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:18:54.290619 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:18:54.290636 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 01:18:54.290654 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 01:18:54.290669 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 01:18:54.290686 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 01:18:54.290704 kernel: loop: module loaded Mar 7 01:18:54.290723 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:18:54.290742 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:18:54.290760 kernel: fuse: init (API version 7.39) Mar 7 01:18:54.290781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:18:54.290798 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:18:54.290815 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:18:54.290833 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 01:18:54.290850 systemd[1]: Stopped verity-setup.service. Mar 7 01:18:54.290898 systemd-journald[1251]: Collecting audit messages is disabled. Mar 7 01:18:54.290939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 7 01:18:54.290959 systemd-journald[1251]: Journal started Mar 7 01:18:54.290994 systemd-journald[1251]: Runtime Journal (/run/log/journal/be44e9d36d9244598e3148d640dda0dd) is 8.0M, max 158.7M, 150.7M free. Mar 7 01:18:54.308275 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 01:18:54.308357 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:18:53.451412 systemd[1]: Queued start job for default target multi-user.target. Mar 7 01:18:53.680084 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 7 01:18:53.680542 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 01:18:54.325655 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:18:54.329049 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 01:18:54.332560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 01:18:54.350135 kernel: ACPI: bus type drm_connector registered Mar 7 01:18:54.338649 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:18:54.344354 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:18:54.348297 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:18:54.354028 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:18:54.358723 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:18:54.359076 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:18:54.363463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:18:54.363787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:18:54.368045 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:18:54.368336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 7 01:18:54.372521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:18:54.372846 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:18:54.377229 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:18:54.377497 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:18:54.381793 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:18:54.382272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:18:54.391701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:18:54.395829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:18:54.400471 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:18:54.420886 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:18:54.429233 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 01:18:54.434247 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:18:54.437798 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:18:54.437950 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:18:54.442560 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:18:54.452331 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:18:54.456927 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:18:54.459957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:18:54.467727 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Mar 7 01:18:54.472346 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:18:54.476176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:18:54.481178 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:18:54.484679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:18:54.490208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:18:54.498340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:18:54.510319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:18:54.519275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:18:54.527747 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:18:54.531603 systemd-journald[1251]: Time spent on flushing to /var/log/journal/be44e9d36d9244598e3148d640dda0dd is 320.102ms for 955 entries. Mar 7 01:18:54.531603 systemd-journald[1251]: System Journal (/var/log/journal/be44e9d36d9244598e3148d640dda0dd) is 8.0M, max 2.6G, 2.6G free. Mar 7 01:19:00.491615 systemd-journald[1251]: Received client request to flush runtime journal. Mar 7 01:19:00.491707 kernel: loop0: detected capacity change from 0 to 140768 Mar 7 01:19:00.491732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:19:00.491750 kernel: loop1: detected capacity change from 0 to 31056 Mar 7 01:18:54.531534 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:18:54.538153 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Mar 7 01:18:54.544451 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:18:54.552868 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:18:54.565382 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:18:54.570301 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:18:54.600246 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 7 01:18:54.845037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:18:55.598700 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Mar 7 01:18:55.598713 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Mar 7 01:18:55.605640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:18:55.615337 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:18:56.597647 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:18:56.606302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:18:56.622364 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Mar 7 01:18:56.622378 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Mar 7 01:18:56.626445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:18:59.882754 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:18:59.890427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:18:59.916036 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Mar 7 01:19:00.493326 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Mar 7 01:19:01.449124 kernel: loop2: detected capacity change from 0 to 142488 Mar 7 01:19:02.141479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:19:02.154968 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:19:02.202283 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 7 01:19:02.653126 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:19:03.239686 kernel: hv_vmbus: registering driver hyperv_fb Mar 7 01:19:03.239750 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 7 01:19:03.239777 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 7 01:19:03.239795 kernel: Console: switching to colour dummy device 80x25 Mar 7 01:19:03.239812 kernel: Console: switching to colour frame buffer device 128x48 Mar 7 01:19:03.239829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#166 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:19:03.240058 kernel: hv_vmbus: registering driver hv_balloon Mar 7 01:19:03.240076 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 7 01:19:02.693861 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:19:02.694693 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:19:03.055395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:19:03.065295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:19:03.065471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:19:03.074298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:19:03.298659 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 7 01:19:03.327125 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Mar 7 01:19:03.454030 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:19:03.757928 systemd-networkd[1329]: lo: Link UP Mar 7 01:19:03.757940 systemd-networkd[1329]: lo: Gained carrier Mar 7 01:19:03.761283 systemd-networkd[1329]: Enumeration completed Mar 7 01:19:03.761567 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:19:03.874402 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1333) Mar 7 01:19:03.874462 kernel: mlx5_core 619a:00:02.0 enP24986s1: Link up Mar 7 01:19:03.874766 kernel: hv_netvsc 7ced8d2b-d0de-7ced-8d2b-d0de7ced8d2b eth0: Data path switched to VF: enP24986s1 Mar 7 01:19:03.765723 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:19:03.765729 systemd-networkd[1329]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:19:03.774430 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:19:03.860215 systemd-networkd[1329]: enP24986s1: Link UP Mar 7 01:19:03.860370 systemd-networkd[1329]: eth0: Link UP Mar 7 01:19:03.860376 systemd-networkd[1329]: eth0: Gained carrier Mar 7 01:19:03.860395 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:19:03.864654 systemd-networkd[1329]: enP24986s1: Gained carrier Mar 7 01:19:03.870636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 01:19:03.880282 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 7 01:19:03.889154 systemd-networkd[1329]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 7 01:19:04.642926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:19:05.451869 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:19:05.458434 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:19:05.736247 systemd-networkd[1329]: eth0: Gained IPv6LL Mar 7 01:19:05.738872 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:19:06.009233 kernel: loop3: detected capacity change from 0 to 217752 Mar 7 01:19:06.050796 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:19:06.052318 kernel: loop4: detected capacity change from 0 to 140768 Mar 7 01:19:06.160249 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:19:06.165178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:19:06.175255 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:19:06.191495 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:19:06.213128 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:19:06.715121 kernel: loop5: detected capacity change from 0 to 31056 Mar 7 01:19:06.803252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:19:07.702118 kernel: loop6: detected capacity change from 0 to 142488 Mar 7 01:19:07.798122 kernel: loop7: detected capacity change from 0 to 217752 Mar 7 01:19:07.811555 (sd-merge)[1420]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 7 01:19:07.812234 (sd-merge)[1420]: Merged extensions into '/usr'. 
Mar 7 01:19:07.816036 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:19:07.816054 systemd[1]: Reloading... Mar 7 01:19:07.880165 zram_generator::config[1453]: No configuration found. Mar 7 01:19:08.030418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:19:08.116477 systemd[1]: Reloading finished in 299 ms. Mar 7 01:19:08.149658 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:19:08.163307 systemd[1]: Starting ensure-sysext.service... Mar 7 01:19:08.169301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:19:08.175607 systemd[1]: Reloading requested from client PID 1511 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:19:08.175775 systemd[1]: Reloading... Mar 7 01:19:08.255172 zram_generator::config[1536]: No configuration found. Mar 7 01:19:08.340586 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:19:08.341116 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:19:08.342394 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:19:08.342847 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Mar 7 01:19:08.342942 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Mar 7 01:19:08.351868 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:19:08.351887 systemd-tmpfiles[1512]: Skipping /boot Mar 7 01:19:08.363837 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 7 01:19:08.363858 systemd-tmpfiles[1512]: Skipping /boot Mar 7 01:19:08.415279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:19:08.492204 systemd[1]: Reloading finished in 315 ms. Mar 7 01:19:08.522674 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:19:08.545449 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:19:08.551019 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:19:08.559228 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:19:08.566436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:19:08.573395 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:19:08.584143 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:08.584627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:19:08.590379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:19:08.596518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:19:08.603202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:19:08.606165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:19:08.606326 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 7 01:19:08.607628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:19:08.607828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:19:08.612575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:19:08.612757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:19:08.616964 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:19:08.617158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:19:08.632083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:08.632767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:19:08.641899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:19:08.648408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:19:08.658404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:19:08.668385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:19:08.672530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:19:08.672815 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:19:08.677569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:19:08.680419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:19:08.680646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:19:08.684911 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 7 01:19:08.686205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:19:08.690727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:19:08.691689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:19:08.713400 systemd[1]: Finished ensure-sysext.service. Mar 7 01:19:08.717069 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:19:08.717270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:19:08.722084 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:19:08.723271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:19:08.723659 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:19:08.798918 systemd-resolved[1604]: Positive Trust Anchors: Mar 7 01:19:08.798936 systemd-resolved[1604]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:19:08.799002 systemd-resolved[1604]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:19:08.802357 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:19:08.851351 systemd-resolved[1604]: Using system hostname 'ci-4081.3.6-n-8271a56a8b'. 
Mar 7 01:19:08.853173 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:19:08.856826 systemd[1]: Reached target network.target - Network. Mar 7 01:19:08.860015 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:19:08.863206 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:19:09.293777 augenrules[1638]: No rules Mar 7 01:19:09.295306 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:19:10.499817 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:19:10.504321 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:19:12.566779 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:19:12.582080 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 01:19:12.591344 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:19:12.605950 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:19:12.609793 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:19:12.613265 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:19:12.616956 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:19:12.620827 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:19:12.623883 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:19:12.627599 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 7 01:19:12.631388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:19:12.631441 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:19:12.634143 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:19:12.637865 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:19:12.642615 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:19:12.654178 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:19:12.658450 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:19:12.661748 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:19:12.664431 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:19:12.667068 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:19:12.667112 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:19:12.674209 systemd[1]: Starting chronyd.service - NTP client/server... Mar 7 01:19:12.681240 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:19:12.694288 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 01:19:12.701253 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:19:12.707902 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:19:12.724162 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:19:12.728358 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Mar 7 01:19:12.728418 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 7 01:19:12.730780 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 7 01:19:12.734442 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 7 01:19:12.737400 (chronyd)[1650]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 7 01:19:12.756261 chronyd[1662]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 7 01:19:12.737494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:19:12.757016 jq[1656]: false Mar 7 01:19:12.753481 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:19:12.757126 KVP[1658]: KVP starting; pid is:1658 Mar 7 01:19:12.762315 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:19:12.766215 chronyd[1662]: Timezone right/UTC failed leap second check, ignoring Mar 7 01:19:12.768212 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:19:12.766434 chronyd[1662]: Loaded seccomp filter (level 2) Mar 7 01:19:12.782315 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:19:12.792122 kernel: hv_utils: KVP IC version 4.0 Mar 7 01:19:12.792232 KVP[1658]: KVP LIC Version: 3.1 Mar 7 01:19:12.798296 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:19:12.808660 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:19:12.812249 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 7 01:19:12.812940 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:19:12.814350 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:19:12.827322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:19:12.832801 systemd[1]: Started chronyd.service - NTP client/server. Mar 7 01:19:12.846685 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:19:12.847924 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:19:12.852276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:19:12.852538 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 01:19:12.884169 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:19:12.905781 jq[1674]: true Mar 7 01:19:12.919605 extend-filesystems[1657]: Found loop4 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found loop5 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found loop6 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found loop7 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda1 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda2 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda3 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found usr Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda4 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda6 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda7 Mar 7 01:19:12.919605 extend-filesystems[1657]: Found sda9 Mar 7 01:19:12.919605 extend-filesystems[1657]: Checking size of /dev/sda9 Mar 7 01:19:13.000778 extend-filesystems[1657]: Old size kept for /dev/sda9 Mar 7 01:19:13.000778 extend-filesystems[1657]: Found sr0
Mar 7 01:19:12.942402 dbus-daemon[1654]: [system] SELinux support is enabled Mar 7 01:19:13.002170 update_engine[1673]: I20260307 01:19:12.939834 1673 main.cc:92] Flatcar Update Engine starting Mar 7 01:19:13.002170 update_engine[1673]: I20260307 01:19:12.977070 1673 update_check_scheduler.cc:74] Next update check in 6m26s Mar 7 01:19:12.933981 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:19:12.934319 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:19:13.002640 jq[1697]: true Mar 7 01:19:12.943110 systemd-logind[1672]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:19:12.946177 systemd-logind[1672]: New seat seat0. Mar 7 01:19:12.946362 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:19:12.963518 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:19:12.963553 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:19:12.965428 (ntainerd)[1696]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:19:12.968427 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:19:12.968455 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:19:12.996596 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:19:13.012916 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:19:13.013147 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:19:13.030069 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:19:13.030814 dbus-daemon[1654]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 01:19:13.047465 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 01:19:13.063514 tar[1678]: linux-amd64/LICENSE Mar 7 01:19:13.066601 tar[1678]: linux-amd64/helm Mar 7 01:19:13.094964 coreos-metadata[1652]: Mar 07 01:19:13.093 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:19:13.119957 coreos-metadata[1652]: Mar 07 01:19:13.117 INFO Fetch successful Mar 7 01:19:13.119957 coreos-metadata[1652]: Mar 07 01:19:13.117 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 7 01:19:13.134937 coreos-metadata[1652]: Mar 07 01:19:13.123 INFO Fetch successful Mar 7 01:19:13.134937 coreos-metadata[1652]: Mar 07 01:19:13.123 INFO Fetching http://168.63.129.16/machine/99ff87ed-5e76-4cb9-8a87-89ad05d7fa4f/7a681689%2Dc0a8%2D4d51%2D9689%2D66ef5388aab4.%5Fci%2D4081.3.6%2Dn%2D8271a56a8b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 7 01:19:13.136508 coreos-metadata[1652]: Mar 07 01:19:13.136 INFO Fetch successful Mar 7 01:19:13.136574 coreos-metadata[1652]: Mar 07 01:19:13.136 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:19:13.154495 coreos-metadata[1652]: Mar 07 01:19:13.151 INFO Fetch successful Mar 7 01:19:13.221373 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 01:19:13.225679 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:19:13.243147 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1718) Mar 7 01:19:13.285028 bash[1745]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:19:13.285504 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Mar 7 01:19:13.300520 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:19:13.313370 locksmithd[1714]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:19:13.668854 sshd_keygen[1680]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:19:13.703575 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:19:13.714515 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:19:13.727189 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 7 01:19:13.769720 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:19:13.769953 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:19:13.791985 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:19:13.802079 containerd[1696]: time="2026-03-07T01:19:13.801987200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:19:13.831979 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 7 01:19:13.850602 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:19:13.865154 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:19:13.879803 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:19:13.886769 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:19:13.914678 containerd[1696]: time="2026-03-07T01:19:13.914047600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917519 containerd[1696]: time="2026-03-07T01:19:13.916984300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917519 containerd[1696]: time="2026-03-07T01:19:13.917040200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:19:13.917519 containerd[1696]: time="2026-03-07T01:19:13.917063800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:19:13.917519 containerd[1696]: time="2026-03-07T01:19:13.917268900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:19:13.917519 containerd[1696]: time="2026-03-07T01:19:13.917291000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917519 containerd[1696]: time="2026-03-07T01:19:13.917363200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917519 containerd[1696]: time="2026-03-07T01:19:13.917381200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917840 containerd[1696]: time="2026-03-07T01:19:13.917586200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917840 containerd[1696]: time="2026-03-07T01:19:13.917607300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917840 containerd[1696]: time="2026-03-07T01:19:13.917625100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917840 containerd[1696]: time="2026-03-07T01:19:13.917637100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:19:13.917840 containerd[1696]: time="2026-03-07T01:19:13.917735800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:19:13.918187 containerd[1696]: time="2026-03-07T01:19:13.918048500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:19:13.918351 containerd[1696]: time="2026-03-07T01:19:13.918321400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:19:13.918351 containerd[1696]: time="2026-03-07T01:19:13.918347100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:19:13.918483 containerd[1696]: time="2026-03-07T01:19:13.918459400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:19:13.918550 containerd[1696]: time="2026-03-07T01:19:13.918529000Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:19:14.001642 containerd[1696]: time="2026-03-07T01:19:14.001364100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:19:14.001642 containerd[1696]: time="2026-03-07T01:19:14.001444800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:19:14.001642 containerd[1696]: time="2026-03-07T01:19:14.001467200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:19:14.001642 containerd[1696]: time="2026-03-07T01:19:14.001489200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:19:14.001642 containerd[1696]: time="2026-03-07T01:19:14.001509900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:19:14.001889 containerd[1696]: time="2026-03-07T01:19:14.001743100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002108800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002279400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002302100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002321400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002340500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002358800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002376500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002396700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002416500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002435500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002453500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002472000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002508700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003113 containerd[1696]: time="2026-03-07T01:19:14.002530200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002547700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002572300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002591700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002611200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002628800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002646800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002664100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002692100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002712500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002730500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002748100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002769700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002802600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002818600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.003654 containerd[1696]: time="2026-03-07T01:19:14.002835000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.002886100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.002909900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.002927100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.002945000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.002959900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.002982200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.002996900Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:19:14.004225 containerd[1696]: time="2026-03-07T01:19:14.003014400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:19:14.008829 containerd[1696]: time="2026-03-07T01:19:14.007911600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:19:14.008829 containerd[1696]: time="2026-03-07T01:19:14.008014600Z" level=info msg="Connect containerd service"
Mar 7 01:19:14.008829 containerd[1696]: time="2026-03-07T01:19:14.008076100Z" level=info msg="using legacy CRI server"
Mar 7 01:19:14.008829 containerd[1696]: time="2026-03-07T01:19:14.008088800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:19:14.008829 containerd[1696]: time="2026-03-07T01:19:14.008256700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:19:14.009262 containerd[1696]: time="2026-03-07T01:19:14.009042100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:19:14.013969 containerd[1696]: time="2026-03-07T01:19:14.013412300Z" level=info msg="Start subscribing containerd event"
Mar 7 01:19:14.013969 containerd[1696]: time="2026-03-07T01:19:14.013488200Z" level=info msg="Start recovering state"
Mar 7 01:19:14.013969 containerd[1696]: time="2026-03-07T01:19:14.013566200Z" level=info msg="Start event monitor"
Mar 7 01:19:14.013969 containerd[1696]: time="2026-03-07T01:19:14.013588700Z" level=info msg="Start snapshots syncer"
Mar 7 01:19:14.013969 containerd[1696]: time="2026-03-07T01:19:14.013605500Z" level=info msg="Start cni network conf syncer for default"
Mar 7 01:19:14.013969 containerd[1696]: time="2026-03-07T01:19:14.013617900Z" level=info msg="Start streaming server"
Mar 7 01:19:14.014253 containerd[1696]: time="2026-03-07T01:19:14.013990000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 01:19:14.014253 containerd[1696]: time="2026-03-07T01:19:14.014143600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 01:19:14.022734 containerd[1696]: time="2026-03-07T01:19:14.018000100Z" level=info msg="containerd successfully booted in 0.218278s"
Mar 7 01:19:14.018240 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:19:14.210553 tar[1678]: linux-amd64/README.md
Mar 7 01:19:14.225364 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:19:14.768283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:19:14.774156 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:19:14.777619 systemd[1]: Startup finished in 1.080s (kernel) + 14.347s (initrd) + 23.211s (userspace) = 38.639s.
Mar 7 01:19:14.777855 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:19:14.956464 waagent[1796]: 2026-03-07T01:19:14.956326Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Mar 7 01:19:14.958408 waagent[1796]: 2026-03-07T01:19:14.958343Z INFO Daemon Daemon OS: flatcar 4081.3.6
Mar 7 01:19:14.959381 waagent[1796]: 2026-03-07T01:19:14.959336Z INFO Daemon Daemon Python: 3.11.9
Mar 7 01:19:14.960051 waagent[1796]: 2026-03-07T01:19:14.960002Z INFO Daemon Daemon Run daemon
Mar 7 01:19:14.960426 waagent[1796]: 2026-03-07T01:19:14.960386Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Mar 7 01:19:14.961261 waagent[1796]: 2026-03-07T01:19:14.960754Z INFO Daemon Daemon Using waagent for provisioning
Mar 7 01:19:14.961966 waagent[1796]: 2026-03-07T01:19:14.961924Z INFO Daemon Daemon Activate resource disk
Mar 7 01:19:14.962989 waagent[1796]: 2026-03-07T01:19:14.962948Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Mar 7 01:19:14.967284 waagent[1796]: 2026-03-07T01:19:14.967237Z INFO Daemon Daemon Found device: None
Mar 7 01:19:14.967993 waagent[1796]: 2026-03-07T01:19:14.967953Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Mar 7 01:19:14.969045 waagent[1796]: 2026-03-07T01:19:14.969006Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Mar 7 01:19:14.971195 waagent[1796]: 2026-03-07T01:19:14.971149Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 7 01:19:14.973106 waagent[1796]: 2026-03-07T01:19:14.972078Z INFO Daemon Daemon Running default provisioning handler
Mar 7 01:19:15.002181 waagent[1796]: 2026-03-07T01:19:15.002063Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Mar 7 01:19:15.010156 waagent[1796]: 2026-03-07T01:19:15.010043Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 7 01:19:15.020586 waagent[1796]: 2026-03-07T01:19:15.011401Z INFO Daemon Daemon cloud-init is enabled: False
Mar 7 01:19:15.020586 waagent[1796]: 2026-03-07T01:19:15.012033Z INFO Daemon Daemon Copying ovf-env.xml
Mar 7 01:19:15.069160 login[1798]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 7 01:19:15.071707 login[1799]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 7 01:19:15.075440 waagent[1796]: 2026-03-07T01:19:15.075270Z INFO Daemon Daemon Successfully mounted dvd
Mar 7 01:19:15.092704 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 01:19:15.101846 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 01:19:15.105328 systemd-logind[1672]: New session 2 of user core.
Mar 7 01:19:15.109338 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Mar 7 01:19:15.113729 systemd-logind[1672]: New session 1 of user core.
Mar 7 01:19:15.117127 waagent[1796]: 2026-03-07T01:19:15.114462Z INFO Daemon Daemon Detect protocol endpoint
Mar 7 01:19:15.117968 waagent[1796]: 2026-03-07T01:19:15.117762Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 7 01:19:15.122927 waagent[1796]: 2026-03-07T01:19:15.122841Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Mar 7 01:19:15.127351 waagent[1796]: 2026-03-07T01:19:15.127277Z INFO Daemon Daemon Test for route to 168.63.129.16
Mar 7 01:19:15.132150 waagent[1796]: 2026-03-07T01:19:15.130656Z INFO Daemon Daemon Route to 168.63.129.16 exists
Mar 7 01:19:15.135118 waagent[1796]: 2026-03-07T01:19:15.134137Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Mar 7 01:19:15.136652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 01:19:15.145361 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 01:19:15.159269 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 01:19:15.162477 waagent[1796]: 2026-03-07T01:19:15.160628Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Mar 7 01:19:15.162477 waagent[1796]: 2026-03-07T01:19:15.162070Z INFO Daemon Daemon Wire protocol version:2012-11-30
Mar 7 01:19:15.164630 waagent[1796]: 2026-03-07T01:19:15.164577Z INFO Daemon Daemon Server preferred version:2015-04-05
Mar 7 01:19:15.384611 systemd[1829]: Queued start job for default target default.target.
Mar 7 01:19:15.389742 systemd[1829]: Created slice app.slice - User Application Slice.
Mar 7 01:19:15.389786 systemd[1829]: Reached target paths.target - Paths.
Mar 7 01:19:15.389805 systemd[1829]: Reached target timers.target - Timers.
Mar 7 01:19:15.392234 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 01:19:15.408813 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 01:19:15.408891 systemd[1829]: Reached target sockets.target - Sockets.
Mar 7 01:19:15.408911 systemd[1829]: Reached target basic.target - Basic System.
Mar 7 01:19:15.408961 systemd[1829]: Reached target default.target - Main User Target.
Mar 7 01:19:15.408997 systemd[1829]: Startup finished in 231ms.
Mar 7 01:19:15.409299 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 01:19:15.414280 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 01:19:15.416601 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 01:19:15.539940 waagent[1796]: 2026-03-07T01:19:15.539563Z INFO Daemon Daemon Initializing goal state during protocol detection
Mar 7 01:19:15.545069 waagent[1796]: 2026-03-07T01:19:15.543981Z INFO Daemon Daemon Forcing an update of the goal state.
Mar 7 01:19:15.550290 waagent[1796]: 2026-03-07T01:19:15.550225Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 7 01:19:15.565516 waagent[1796]: 2026-03-07T01:19:15.565456Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179
Mar 7 01:19:15.569453 waagent[1796]: 2026-03-07T01:19:15.569033Z INFO Daemon
Mar 7 01:19:15.572337 waagent[1796]: 2026-03-07T01:19:15.570954Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ff17ab1d-4423-454a-97d7-b2effbc1261f eTag: 2564449759819422671 source: Fabric]
Mar 7 01:19:15.572828 waagent[1796]: 2026-03-07T01:19:15.572781Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Mar 7 01:19:15.574518 waagent[1796]: 2026-03-07T01:19:15.574476Z INFO Daemon
Mar 7 01:19:15.574896 waagent[1796]: 2026-03-07T01:19:15.574858Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Mar 7 01:19:15.588265 waagent[1796]: 2026-03-07T01:19:15.587884Z INFO Daemon Daemon Downloading artifacts profile blob
Mar 7 01:19:15.667410 waagent[1796]: 2026-03-07T01:19:15.667256Z INFO Daemon Downloaded certificate {'thumbprint': '57939EE3F041BB86A5768F6A1FC0376952B49151', 'hasPrivateKey': True}
Mar 7 01:19:15.673607 waagent[1796]: 2026-03-07T01:19:15.673533Z INFO Daemon Fetch goal state completed
Mar 7 01:19:15.681462 waagent[1796]: 2026-03-07T01:19:15.681413Z INFO Daemon Daemon Starting provisioning
Mar 7 01:19:15.683168 waagent[1796]: 2026-03-07T01:19:15.683102Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 7 01:19:15.684249 waagent[1796]: 2026-03-07T01:19:15.684205Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-8271a56a8b]
Mar 7 01:19:15.690665 waagent[1796]: 2026-03-07T01:19:15.690594Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-8271a56a8b]
Mar 7 01:19:15.692142 waagent[1796]: 2026-03-07T01:19:15.692074Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 7 01:19:15.692671 waagent[1796]: 2026-03-07T01:19:15.692632Z INFO Daemon Daemon Primary interface is [eth0]
Mar 7 01:19:15.710577 systemd-networkd[1329]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:19:15.710588 systemd-networkd[1329]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:19:15.710637 systemd-networkd[1329]: eth0: DHCP lease lost
Mar 7 01:19:15.712812 waagent[1796]: 2026-03-07T01:19:15.712512Z INFO Daemon Daemon Create user account if not exists
Mar 7 01:19:15.731784 waagent[1796]: 2026-03-07T01:19:15.714392Z INFO Daemon Daemon User core already exists, skip useradd
Mar 7 01:19:15.731784 waagent[1796]: 2026-03-07T01:19:15.715494Z INFO Daemon Daemon Configure sudoer
Mar 7 01:19:15.731784 waagent[1796]: 2026-03-07T01:19:15.716330Z INFO Daemon Daemon Configure sshd
Mar 7 01:19:15.731784 waagent[1796]: 2026-03-07T01:19:15.717296Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Mar 7 01:19:15.731784 waagent[1796]: 2026-03-07T01:19:15.718173Z INFO Daemon Daemon Deploy ssh public key.
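The "Examine /proc/net/route for primary interface" step above boils down to scanning the kernel route table for the default route (Destination `00000000`) and taking its interface. A minimal sketch of that scan, assuming the standard /proc/net/route column layout (this is an illustration, not waagent's actual code):

```python
def primary_interface(route_table: str) -> str:
    """Return the interface of the first default route in /proc/net/route text.

    Column 0 is Iface and column 1 is Destination as little-endian hex,
    so '00000000' marks the default (0.0.0.0/0) route.
    """
    for line in route_table.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "00000000":
            return fields[0]
    raise LookupError("no default route found")

# On a live system:
#   with open("/proc/net/route") as f:
#       print(primary_interface(f.read()))
```

On this VM the scan lands on eth0, matching the "Primary interface is [eth0]" line.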
Mar 7 01:19:15.734223 systemd-networkd[1329]: eth0: DHCPv6 lease lost
Mar 7 01:19:15.764167 systemd-networkd[1329]: eth0: DHCPv4 address 10.200.8.18/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 7 01:19:15.767890 kubelet[1811]: E0307 01:19:15.767798 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:19:15.770339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:19:15.770532 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:19:25.836536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:19:25.842691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:19:25.950560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:19:25.955947 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:19:26.665997 kubelet[1881]: E0307 01:19:26.665932 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:19:26.669669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:19:26.669879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:19:36.556257 chronyd[1662]: Selected source PHC0
Mar 7 01:19:36.836657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:19:36.841330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:19:36.950281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:19:36.950557 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:19:37.667751 kubelet[1895]: E0307 01:19:37.667691 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:19:37.670264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:19:37.670468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:19:45.791545 waagent[1796]: 2026-03-07T01:19:45.791479Z INFO Daemon Daemon Provisioning complete
Mar 7 01:19:45.803413 waagent[1796]: 2026-03-07T01:19:45.803348Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 7 01:19:45.811895 waagent[1796]: 2026-03-07T01:19:45.804860Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
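The repeating kubelet failures here are the expected pre-join state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, and until it exists the unit exits 1 and systemd re-schedules it (restart counters 1 and 2 above, roughly 11 s apart per the timestamps). A small parser (hypothetical, assuming this log's "Mar 7 HH:MM:SS.ffffff" timestamp layout and a fixed year) that measures the gap between two such entries:

```python
from datetime import datetime

def journal_ts(line: str, year: int = 2026) -> datetime:
    """Parse the leading 'Mar 7 01:19:25.836536' timestamp of a log line."""
    month, day, clock = line.split()[:3]
    return datetime.strptime(f"{month} {day} {year} {clock}", "%b %d %Y %H:%M:%S.%f")

first = journal_ts("Mar 7 01:19:25.836536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.")
second = journal_ts("Mar 7 01:19:36.836657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.")
print(f"restart gap: {(second - first).total_seconds():.1f}s")  # prints: restart gap: 11.0s
```

The constant gap reflects the unit's restart interval (Restart=/RestartSec= in kubelet.service); the loop resolves on its own once kubeadm provisions the node.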
Mar 7 01:19:45.811895 waagent[1796]: 2026-03-07T01:19:45.805884Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Mar 7 01:19:45.934656 waagent[1902]: 2026-03-07T01:19:45.934542Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Mar 7 01:19:45.935083 waagent[1902]: 2026-03-07T01:19:45.934722Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Mar 7 01:19:45.935083 waagent[1902]: 2026-03-07T01:19:45.934806Z INFO ExtHandler ExtHandler Python: 3.11.9
Mar 7 01:19:45.952703 waagent[1902]: 2026-03-07T01:19:45.952599Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 7 01:19:45.952942 waagent[1902]: 2026-03-07T01:19:45.952890Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 7 01:19:45.953049 waagent[1902]: 2026-03-07T01:19:45.953001Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 7 01:19:45.960184 waagent[1902]: 2026-03-07T01:19:45.960112Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 7 01:19:45.968422 waagent[1902]: 2026-03-07T01:19:45.968363Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Mar 7 01:19:45.968939 waagent[1902]: 2026-03-07T01:19:45.968880Z INFO ExtHandler
Mar 7 01:19:45.969045 waagent[1902]: 2026-03-07T01:19:45.968982Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e3fdaa21-d51b-4cd6-a117-3856c9037fba eTag: 2564449759819422671 source: Fabric]
Mar 7 01:19:45.969369 waagent[1902]: 2026-03-07T01:19:45.969317Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 7 01:19:45.969950 waagent[1902]: 2026-03-07T01:19:45.969894Z INFO ExtHandler
Mar 7 01:19:45.970024 waagent[1902]: 2026-03-07T01:19:45.969983Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 7 01:19:45.973142 waagent[1902]: 2026-03-07T01:19:45.973080Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 7 01:19:46.029000 waagent[1902]: 2026-03-07T01:19:46.028900Z INFO ExtHandler Downloaded certificate {'thumbprint': '57939EE3F041BB86A5768F6A1FC0376952B49151', 'hasPrivateKey': True}
Mar 7 01:19:46.029578 waagent[1902]: 2026-03-07T01:19:46.029518Z INFO ExtHandler Fetch goal state completed
Mar 7 01:19:46.043829 waagent[1902]: 2026-03-07T01:19:46.043689Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1902
Mar 7 01:19:46.043956 waagent[1902]: 2026-03-07T01:19:46.043898Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Mar 7 01:19:46.045640 waagent[1902]: 2026-03-07T01:19:46.045575Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Mar 7 01:19:46.046044 waagent[1902]: 2026-03-07T01:19:46.045991Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 7 01:19:46.095050 waagent[1902]: 2026-03-07T01:19:46.095000Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 7 01:19:46.095317 waagent[1902]: 2026-03-07T01:19:46.095261Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 7 01:19:46.102299 waagent[1902]: 2026-03-07T01:19:46.102254Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 7 01:19:46.109347 systemd[1]: Reloading requested from client PID 1915 ('systemctl') (unit waagent.service)...
Mar 7 01:19:46.109365 systemd[1]: Reloading...
Mar 7 01:19:46.210125 zram_generator::config[1946]: No configuration found. Mar 7 01:19:46.333450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:19:46.415519 systemd[1]: Reloading finished in 305 ms. Mar 7 01:19:46.444622 waagent[1902]: 2026-03-07T01:19:46.444130Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 7 01:19:46.454494 systemd[1]: Reloading requested from client PID 2006 ('systemctl') (unit waagent.service)... Mar 7 01:19:46.454512 systemd[1]: Reloading... Mar 7 01:19:46.537178 zram_generator::config[2037]: No configuration found. Mar 7 01:19:46.670587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:19:46.752728 systemd[1]: Reloading finished in 297 ms. Mar 7 01:19:46.778427 waagent[1902]: 2026-03-07T01:19:46.778314Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 7 01:19:46.778586 waagent[1902]: 2026-03-07T01:19:46.778531Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 7 01:19:46.930709 waagent[1902]: 2026-03-07T01:19:46.930560Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 7 01:19:46.931260 waagent[1902]: 2026-03-07T01:19:46.931196Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 7 01:19:46.932069 waagent[1902]: 2026-03-07T01:19:46.931991Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 7 01:19:46.932919 waagent[1902]: 2026-03-07T01:19:46.932856Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 7 01:19:46.933132 waagent[1902]: 2026-03-07T01:19:46.933068Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:19:46.933258 waagent[1902]: 2026-03-07T01:19:46.933218Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:19:46.933358 waagent[1902]: 2026-03-07T01:19:46.933318Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:19:46.933563 waagent[1902]: 2026-03-07T01:19:46.933488Z INFO EnvHandler ExtHandler Configure routes Mar 7 01:19:46.933851 waagent[1902]: 2026-03-07T01:19:46.933778Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 7 01:19:46.933923 waagent[1902]: 2026-03-07T01:19:46.933849Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 7 01:19:46.934285 waagent[1902]: 2026-03-07T01:19:46.934238Z INFO EnvHandler ExtHandler Gateway:None Mar 7 01:19:46.934594 waagent[1902]: 2026-03-07T01:19:46.934548Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:19:46.934851 waagent[1902]: 2026-03-07T01:19:46.934793Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 7 01:19:46.935245 waagent[1902]: 2026-03-07T01:19:46.935017Z INFO EnvHandler ExtHandler Routes:None Mar 7 01:19:46.935566 waagent[1902]: 2026-03-07T01:19:46.935439Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 7 01:19:46.935566 waagent[1902]: 2026-03-07T01:19:46.935513Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 7 01:19:46.936296 waagent[1902]: 2026-03-07T01:19:46.936246Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 7 01:19:46.936296 waagent[1902]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 7 01:19:46.936296 waagent[1902]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 7 01:19:46.936296 waagent[1902]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 7 01:19:46.936296 waagent[1902]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:19:46.936296 waagent[1902]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:19:46.936296 waagent[1902]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:19:46.938156 waagent[1902]: 2026-03-07T01:19:46.937052Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 7 01:19:46.944052 waagent[1902]: 2026-03-07T01:19:46.943759Z INFO ExtHandler ExtHandler Mar 7 01:19:46.944052 waagent[1902]: 2026-03-07T01:19:46.943885Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 94a1018a-531e-4665-8d81-925dcc1188e7 correlation 23b5e3ca-64d5-4e9d-b982-bc9636eefa12 created: 2026-03-07T01:18:10.956123Z] Mar 7 01:19:46.949114 waagent[1902]: 2026-03-07T01:19:46.947510Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 7 01:19:46.949114 waagent[1902]: 2026-03-07T01:19:46.948319Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Mar 7 01:19:46.958254 waagent[1902]: 2026-03-07T01:19:46.958191Z INFO MonitorHandler ExtHandler Network interfaces: Mar 7 01:19:46.958254 waagent[1902]: Executing ['ip', '-a', '-o', 'link']: Mar 7 01:19:46.958254 waagent[1902]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 7 01:19:46.958254 waagent[1902]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2b:d0:de brd ff:ff:ff:ff:ff:ff Mar 7 01:19:46.958254 waagent[1902]: 3: enP24986s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2b:d0:de brd ff:ff:ff:ff:ff:ff\ altname enP24986p0s2 Mar 7 01:19:46.958254 waagent[1902]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 7 01:19:46.958254 waagent[1902]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 7 01:19:46.958254 waagent[1902]: 2: eth0 inet 10.200.8.18/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 7 01:19:46.958254 waagent[1902]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 7 01:19:46.958254 waagent[1902]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 7 01:19:46.958254 waagent[1902]: 2: eth0 inet6 fe80::7eed:8dff:fe2b:d0de/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 7 01:19:46.992179 waagent[1902]: 2026-03-07T01:19:46.992112Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 24FBC0E0-4B79-4CA2-8223-F201E9AC4ED4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 7 01:19:47.004070 waagent[1902]: 2026-03-07T01:19:47.003998Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Mar 7 01:19:47.004070 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:19:47.004070 waagent[1902]: pkts bytes target prot opt in out source destination Mar 7 01:19:47.004070 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:19:47.004070 waagent[1902]: pkts bytes target prot opt in out source destination Mar 7 01:19:47.004070 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:19:47.004070 waagent[1902]: pkts bytes target prot opt in out source destination Mar 7 01:19:47.004070 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:19:47.004070 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:19:47.004070 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:19:47.007551 waagent[1902]: 2026-03-07T01:19:47.007491Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 7 01:19:47.007551 waagent[1902]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:19:47.007551 waagent[1902]: pkts bytes target prot opt in out source destination Mar 7 01:19:47.007551 waagent[1902]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:19:47.007551 waagent[1902]: pkts bytes target prot opt in out source destination Mar 7 01:19:47.007551 waagent[1902]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:19:47.007551 waagent[1902]: pkts bytes target prot opt in out source destination Mar 7 01:19:47.007551 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:19:47.007551 waagent[1902]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:19:47.007551 waagent[1902]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:19:47.007952 waagent[1902]: 2026-03-07T01:19:47.007807Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 7 01:19:47.836560 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 3. Mar 7 01:19:47.842344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:19:48.002968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:19:48.016482 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:19:48.056224 kubelet[2138]: E0307 01:19:48.056166 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:19:48.058875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:19:48.059075 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:19:51.284913 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Mar 7 01:19:57.213559 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:19:57.218404 systemd[1]: Started sshd@0-10.200.8.18:22-10.200.16.10:46040.service - OpenSSH per-connection server daemon (10.200.16.10:46040). Mar 7 01:19:57.856131 sshd[2147]: Accepted publickey for core from 10.200.16.10 port 46040 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:57.856996 sshd[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:57.861142 systemd-logind[1672]: New session 3 of user core. Mar 7 01:19:57.868292 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:19:58.086409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:19:58.092374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:19:58.140173 update_engine[1673]: I20260307 01:19:58.138675 1673 update_attempter.cc:509] Updating boot flags... Mar 7 01:19:58.401086 systemd[1]: Started sshd@1-10.200.8.18:22-10.200.16.10:46046.service - OpenSSH per-connection server daemon (10.200.16.10:46046). Mar 7 01:19:58.832301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:19:58.852796 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2174) Mar 7 01:19:58.851886 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:19:58.969780 kubelet[2172]: E0307 01:19:58.969732 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:19:58.973237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:19:58.973490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:19:59.017134 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2173) Mar 7 01:19:59.027316 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 46046 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:59.029298 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:59.040010 systemd-logind[1672]: New session 4 of user core. Mar 7 01:19:59.045290 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 7 01:19:59.137120 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2173) Mar 7 01:19:59.469940 sshd[2157]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:59.473063 systemd[1]: sshd@1-10.200.8.18:22-10.200.16.10:46046.service: Deactivated successfully. Mar 7 01:19:59.475168 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:19:59.476675 systemd-logind[1672]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:19:59.477700 systemd-logind[1672]: Removed session 4. Mar 7 01:19:59.580004 systemd[1]: Started sshd@2-10.200.8.18:22-10.200.16.10:46056.service - OpenSSH per-connection server daemon (10.200.16.10:46056). Mar 7 01:20:00.203302 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 46056 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:20:00.204856 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:00.208943 systemd-logind[1672]: New session 5 of user core. Mar 7 01:20:00.219352 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:20:00.641838 sshd[2267]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:00.644824 systemd-logind[1672]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:20:00.645084 systemd[1]: sshd@2-10.200.8.18:22-10.200.16.10:46056.service: Deactivated successfully. Mar 7 01:20:00.647384 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:20:00.649333 systemd-logind[1672]: Removed session 5. Mar 7 01:20:00.752959 systemd[1]: Started sshd@3-10.200.8.18:22-10.200.16.10:49180.service - OpenSSH per-connection server daemon (10.200.16.10:49180). 
Mar 7 01:20:01.376739 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 49180 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:20:01.378310 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:01.382431 systemd-logind[1672]: New session 6 of user core. Mar 7 01:20:01.393318 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:20:01.821750 sshd[2274]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:01.825249 systemd[1]: sshd@3-10.200.8.18:22-10.200.16.10:49180.service: Deactivated successfully. Mar 7 01:20:01.827405 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:20:01.828909 systemd-logind[1672]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:20:01.829985 systemd-logind[1672]: Removed session 6. Mar 7 01:20:01.937429 systemd[1]: Started sshd@4-10.200.8.18:22-10.200.16.10:49190.service - OpenSSH per-connection server daemon (10.200.16.10:49190). Mar 7 01:20:02.562998 sshd[2281]: Accepted publickey for core from 10.200.16.10 port 49190 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:20:02.564535 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:02.568518 systemd-logind[1672]: New session 7 of user core. Mar 7 01:20:02.572256 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:20:02.938550 sudo[2284]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:20:02.938931 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:20:02.956532 sudo[2284]: pam_unix(sudo:session): session closed for user root Mar 7 01:20:03.057647 sshd[2281]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:03.061124 systemd[1]: sshd@4-10.200.8.18:22-10.200.16.10:49190.service: Deactivated successfully. Mar 7 01:20:03.063328 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 7 01:20:03.064955 systemd-logind[1672]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:20:03.066238 systemd-logind[1672]: Removed session 7. Mar 7 01:20:03.167470 systemd[1]: Started sshd@5-10.200.8.18:22-10.200.16.10:49196.service - OpenSSH per-connection server daemon (10.200.16.10:49196). Mar 7 01:20:03.795526 sshd[2289]: Accepted publickey for core from 10.200.16.10 port 49196 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:20:03.796205 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:03.801340 systemd-logind[1672]: New session 8 of user core. Mar 7 01:20:03.810294 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:20:04.138952 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:20:04.139352 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:20:04.142744 sudo[2293]: pam_unix(sudo:session): session closed for user root Mar 7 01:20:04.147811 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:20:04.148179 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:20:04.166475 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:20:04.168290 auditctl[2296]: No rules Mar 7 01:20:04.169412 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:20:04.169657 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:20:04.171554 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:20:04.198245 augenrules[2314]: No rules Mar 7 01:20:04.199701 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Mar 7 01:20:04.200917 sudo[2292]: pam_unix(sudo:session): session closed for user root Mar 7 01:20:04.301941 sshd[2289]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:04.305373 systemd[1]: sshd@5-10.200.8.18:22-10.200.16.10:49196.service: Deactivated successfully. Mar 7 01:20:04.307348 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:20:04.309449 systemd-logind[1672]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:20:04.310384 systemd-logind[1672]: Removed session 8. Mar 7 01:20:04.413205 systemd[1]: Started sshd@6-10.200.8.18:22-10.200.16.10:49200.service - OpenSSH per-connection server daemon (10.200.16.10:49200). Mar 7 01:20:05.037799 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 49200 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:20:05.039338 sshd[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:20:05.044346 systemd-logind[1672]: New session 9 of user core. Mar 7 01:20:05.050295 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:20:05.381521 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:20:05.381893 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:20:07.361425 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:20:07.364001 (dockerd)[2340]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:20:09.086414 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:20:09.091363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:09.284853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:20:09.303517 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:20:09.914006 kubelet[2353]: E0307 01:20:09.913944 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:20:09.916453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:20:09.916657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:20:10.571960 dockerd[2340]: time="2026-03-07T01:20:10.571408494Z" level=info msg="Starting up" Mar 7 01:20:11.254950 dockerd[2340]: time="2026-03-07T01:20:11.254634050Z" level=info msg="Loading containers: start." Mar 7 01:20:11.481128 kernel: Initializing XFRM netlink socket Mar 7 01:20:11.695978 systemd-networkd[1329]: docker0: Link UP Mar 7 01:20:11.720376 dockerd[2340]: time="2026-03-07T01:20:11.720326381Z" level=info msg="Loading containers: done." 
Mar 7 01:20:11.824669 dockerd[2340]: time="2026-03-07T01:20:11.824604223Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:20:11.824854 dockerd[2340]: time="2026-03-07T01:20:11.824743225Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:20:11.824912 dockerd[2340]: time="2026-03-07T01:20:11.824885728Z" level=info msg="Daemon has completed initialization" Mar 7 01:20:11.893750 dockerd[2340]: time="2026-03-07T01:20:11.893661110Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:20:11.894126 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:20:12.451760 containerd[1696]: time="2026-03-07T01:20:12.451708196Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 01:20:13.248956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581505348.mount: Deactivated successfully. 
Mar 7 01:20:15.051114 containerd[1696]: time="2026-03-07T01:20:15.051041318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:15.053618 containerd[1696]: time="2026-03-07T01:20:15.053551758Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696475" Mar 7 01:20:15.057467 containerd[1696]: time="2026-03-07T01:20:15.057401218Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:15.063370 containerd[1696]: time="2026-03-07T01:20:15.063329112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:15.064543 containerd[1696]: time="2026-03-07T01:20:15.064349128Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.612595231s" Mar 7 01:20:15.064543 containerd[1696]: time="2026-03-07T01:20:15.064396528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 7 01:20:15.065235 containerd[1696]: time="2026-03-07T01:20:15.065211841Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 01:20:16.936571 containerd[1696]: time="2026-03-07T01:20:16.936512018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:16.939327 containerd[1696]: time="2026-03-07T01:20:16.939088054Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450708" Mar 7 01:20:16.942234 containerd[1696]: time="2026-03-07T01:20:16.942185197Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:16.947126 containerd[1696]: time="2026-03-07T01:20:16.947073966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:16.948275 containerd[1696]: time="2026-03-07T01:20:16.948110680Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.882754537s" Mar 7 01:20:16.948275 containerd[1696]: time="2026-03-07T01:20:16.948154781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 7 01:20:16.949284 containerd[1696]: time="2026-03-07T01:20:16.949227896Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 01:20:18.287198 containerd[1696]: time="2026-03-07T01:20:18.287143132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:18.290978 containerd[1696]: time="2026-03-07T01:20:18.290760982Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548437" Mar 7 01:20:18.295067 containerd[1696]: time="2026-03-07T01:20:18.295002341Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:18.299824 containerd[1696]: time="2026-03-07T01:20:18.299494004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:18.300857 containerd[1696]: time="2026-03-07T01:20:18.300507618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.351241522s" Mar 7 01:20:18.300857 containerd[1696]: time="2026-03-07T01:20:18.300549818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 7 01:20:18.301517 containerd[1696]: time="2026-03-07T01:20:18.301484631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 01:20:19.760802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643410905.mount: Deactivated successfully. Mar 7 01:20:20.087035 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 01:20:20.098215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:20.313760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:20:20.327509 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:20:20.367016 kubelet[2573]: E0307 01:20:20.366819 2573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:20:20.369507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:20:20.369735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:20:20.841384 containerd[1696]: time="2026-03-07T01:20:20.841332209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:20.844150 containerd[1696]: time="2026-03-07T01:20:20.844076348Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685320" Mar 7 01:20:20.849059 containerd[1696]: time="2026-03-07T01:20:20.848969116Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:20.852907 containerd[1696]: time="2026-03-07T01:20:20.852852770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:20.853621 containerd[1696]: time="2026-03-07T01:20:20.853456878Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 2.551930846s" Mar 7 01:20:20.853621 containerd[1696]: time="2026-03-07T01:20:20.853497379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 7 01:20:20.854311 containerd[1696]: time="2026-03-07T01:20:20.854280390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 01:20:21.452372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641103369.mount: Deactivated successfully. Mar 7 01:20:23.074782 containerd[1696]: time="2026-03-07T01:20:23.074726419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:23.077060 containerd[1696]: time="2026-03-07T01:20:23.076992350Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556550" Mar 7 01:20:23.081251 containerd[1696]: time="2026-03-07T01:20:23.081181409Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:23.086011 containerd[1696]: time="2026-03-07T01:20:23.085949675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:23.087675 containerd[1696]: time="2026-03-07T01:20:23.087053291Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.2327397s" Mar 7 01:20:23.087675 containerd[1696]: time="2026-03-07T01:20:23.087116491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 7 01:20:23.088303 containerd[1696]: time="2026-03-07T01:20:23.088236207Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:20:23.635691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929996789.mount: Deactivated successfully. Mar 7 01:20:23.656928 containerd[1696]: time="2026-03-07T01:20:23.656871028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:23.659267 containerd[1696]: time="2026-03-07T01:20:23.659196260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Mar 7 01:20:23.661900 containerd[1696]: time="2026-03-07T01:20:23.661840597Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:23.666206 containerd[1696]: time="2026-03-07T01:20:23.666148057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:23.667042 containerd[1696]: time="2026-03-07T01:20:23.666860067Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 578.58806ms" Mar 7 
01:20:23.667042 containerd[1696]: time="2026-03-07T01:20:23.666900467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:20:23.667663 containerd[1696]: time="2026-03-07T01:20:23.667387674Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 01:20:24.318944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238154513.mount: Deactivated successfully. Mar 7 01:20:25.710502 containerd[1696]: time="2026-03-07T01:20:25.710439992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:25.713408 containerd[1696]: time="2026-03-07T01:20:25.713197530Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630330" Mar 7 01:20:25.716455 containerd[1696]: time="2026-03-07T01:20:25.716384674Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:25.720648 containerd[1696]: time="2026-03-07T01:20:25.720510532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:25.721750 containerd[1696]: time="2026-03-07T01:20:25.721469045Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.054046871s" Mar 7 01:20:25.721750 containerd[1696]: time="2026-03-07T01:20:25.721511546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference 
\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 7 01:20:27.049907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:27.056413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:27.101289 systemd[1]: Reloading requested from client PID 2729 ('systemctl') (unit session-9.scope)... Mar 7 01:20:27.101502 systemd[1]: Reloading... Mar 7 01:20:27.208150 zram_generator::config[2768]: No configuration found. Mar 7 01:20:27.345552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:20:27.452085 systemd[1]: Reloading finished in 349 ms. Mar 7 01:20:27.500510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:27.505381 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:27.507767 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:20:27.508014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:27.513413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:27.844358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:27.860445 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:20:27.896198 kubelet[2841]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:20:28.541557 kubelet[2841]: I0307 01:20:28.541337 2841 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 01:20:28.541557 kubelet[2841]: I0307 01:20:28.541417 2841 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:20:28.541557 kubelet[2841]: I0307 01:20:28.541437 2841 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:20:28.541557 kubelet[2841]: I0307 01:20:28.541444 2841 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:20:28.541817 kubelet[2841]: I0307 01:20:28.541780 2841 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:20:28.602063 kubelet[2841]: E0307 01:20:28.602020 2841 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:20:28.605598 kubelet[2841]: I0307 01:20:28.605398 2841 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:20:28.609085 kubelet[2841]: E0307 01:20:28.609047 2841 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:20:28.609223 kubelet[2841]: I0307 01:20:28.609120 2841 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:20:28.613645 kubelet[2841]: I0307 01:20:28.613067 2841 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:20:28.614165 kubelet[2841]: I0307 01:20:28.614083 2841 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:20:28.614388 kubelet[2841]: I0307 01:20:28.614167 2841 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8271a56a8b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:20:28.614555 kubelet[2841]: I0307 01:20:28.614398 2841 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 
01:20:28.614555 kubelet[2841]: I0307 01:20:28.614410 2841 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:20:28.614555 kubelet[2841]: I0307 01:20:28.614527 2841 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:20:28.621942 kubelet[2841]: I0307 01:20:28.621917 2841 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:20:28.622157 kubelet[2841]: I0307 01:20:28.622140 2841 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:20:28.622239 kubelet[2841]: I0307 01:20:28.622160 2841 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:20:28.622239 kubelet[2841]: I0307 01:20:28.622196 2841 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:20:28.622239 kubelet[2841]: I0307 01:20:28.622214 2841 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:20:28.626693 kubelet[2841]: I0307 01:20:28.626119 2841 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:20:28.629443 kubelet[2841]: I0307 01:20:28.628632 2841 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:20:28.629443 kubelet[2841]: I0307 01:20:28.628685 2841 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:20:28.629443 kubelet[2841]: W0307 01:20:28.628758 2841 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 7 01:20:28.632479 kubelet[2841]: I0307 01:20:28.631943 2841 server.go:1257] "Started kubelet" Mar 7 01:20:28.634171 kubelet[2841]: I0307 01:20:28.634141 2841 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:20:28.639051 kubelet[2841]: I0307 01:20:28.638978 2841 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:20:28.639176 kubelet[2841]: I0307 01:20:28.639087 2841 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:20:28.639499 kubelet[2841]: I0307 01:20:28.639473 2841 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:20:28.639767 kubelet[2841]: I0307 01:20:28.639750 2841 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:20:28.644134 kubelet[2841]: I0307 01:20:28.644115 2841 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:20:28.647932 kubelet[2841]: E0307 01:20:28.645780 2841 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8271a56a8b.189a6a6f13eccd61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8271a56a8b,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8271a56a8b,},FirstTimestamp:2026-03-07 01:20:28.631911777 +0000 UTC m=+0.767963170,LastTimestamp:2026-03-07 01:20:28.631911777 +0000 UTC m=+0.767963170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8271a56a8b,}" Mar 7 01:20:28.650850 kubelet[2841]: I0307 01:20:28.650820 2841 volume_manager.go:311] "Starting Kubelet Volume Manager" 
Mar 7 01:20:28.650974 kubelet[2841]: I0307 01:20:28.650318 2841 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:20:28.652253 kubelet[2841]: E0307 01:20:28.651057 2841 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" Mar 7 01:20:28.654104 kubelet[2841]: I0307 01:20:28.654068 2841 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:20:28.655029 kubelet[2841]: I0307 01:20:28.655007 2841 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:20:28.655290 kubelet[2841]: I0307 01:20:28.654300 2841 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:20:28.655967 kubelet[2841]: I0307 01:20:28.654363 2841 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:20:28.656074 kubelet[2841]: E0307 01:20:28.655911 2841 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8271a56a8b?timeout=10s\": dial tcp 10.200.8.18:6443: connect: connection refused" interval="200ms" Mar 7 01:20:28.657421 kubelet[2841]: I0307 01:20:28.657402 2841 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:20:28.659315 kubelet[2841]: E0307 01:20:28.659277 2841 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:20:28.661792 kubelet[2841]: I0307 01:20:28.661739 2841 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:20:28.689014 kubelet[2841]: I0307 01:20:28.688964 2841 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:20:28.689014 kubelet[2841]: I0307 01:20:28.689005 2841 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:20:28.689381 kubelet[2841]: I0307 01:20:28.689038 2841 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:20:28.689381 kubelet[2841]: E0307 01:20:28.689126 2841 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:20:28.695030 kubelet[2841]: I0307 01:20:28.694767 2841 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:20:28.695030 kubelet[2841]: I0307 01:20:28.694785 2841 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:20:28.695030 kubelet[2841]: I0307 01:20:28.694806 2841 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:20:28.705534 kubelet[2841]: I0307 01:20:28.705503 2841 policy_none.go:50] "Start" Mar 7 01:20:28.705534 kubelet[2841]: I0307 01:20:28.705531 2841 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:20:28.705712 kubelet[2841]: I0307 01:20:28.705546 2841 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:20:28.712585 kubelet[2841]: I0307 01:20:28.712549 2841 policy_none.go:44] "Start" Mar 7 01:20:28.716919 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:20:28.726281 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:20:28.729813 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 01:20:28.740060 kubelet[2841]: E0307 01:20:28.740026 2841 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:20:28.740691 kubelet[2841]: I0307 01:20:28.740487 2841 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:20:28.740691 kubelet[2841]: I0307 01:20:28.740510 2841 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:20:28.740840 kubelet[2841]: I0307 01:20:28.740778 2841 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:20:28.742825 kubelet[2841]: E0307 01:20:28.742472 2841 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:20:28.742825 kubelet[2841]: E0307 01:20:28.742521 2841 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-8271a56a8b\" not found" Mar 7 01:20:28.842671 kubelet[2841]: I0307 01:20:28.842639 2841 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.843045 kubelet[2841]: E0307 01:20:28.843015 2841 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.8.18:6443/api/v1/nodes\": dial tcp 10.200.8.18:6443: connect: connection refused" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.856715 kubelet[2841]: E0307 01:20:28.856671 2841 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8271a56a8b?timeout=10s\": dial tcp 10.200.8.18:6443: connect: connection refused" interval="400ms" Mar 7 01:20:28.857937 kubelet[2841]: I0307 01:20:28.857867 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7f498f99c68c4b39502d7e063118a861-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" (UID: \"7f498f99c68c4b39502d7e063118a861\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.857937 kubelet[2841]: I0307 01:20:28.857926 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f498f99c68c4b39502d7e063118a861-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" (UID: \"7f498f99c68c4b39502d7e063118a861\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.858114 kubelet[2841]: I0307 01:20:28.857960 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f498f99c68c4b39502d7e063118a861-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" (UID: \"7f498f99c68c4b39502d7e063118a861\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.891147 systemd[1]: Created slice kubepods-burstable-pod7f498f99c68c4b39502d7e063118a861.slice - libcontainer container kubepods-burstable-pod7f498f99c68c4b39502d7e063118a861.slice. 
Mar 7 01:20:28.896737 kubelet[2841]: E0307 01:20:28.896704 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.958526 kubelet[2841]: I0307 01:20:28.958282 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.958526 kubelet[2841]: I0307 01:20:28.958403 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.958526 kubelet[2841]: I0307 01:20:28.958430 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:28.958526 kubelet[2841]: I0307 01:20:28.958452 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 
7 01:20:28.958526 kubelet[2841]: I0307 01:20:28.958475 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.045880 kubelet[2841]: I0307 01:20:29.045691 2841 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.046122 kubelet[2841]: E0307 01:20:29.046073 2841 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.8.18:6443/api/v1/nodes\": dial tcp 10.200.8.18:6443: connect: connection refused" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.052268 systemd[1]: Created slice kubepods-burstable-pod4056c3137fa4f73cd17937ae13e3915b.slice - libcontainer container kubepods-burstable-pod4056c3137fa4f73cd17937ae13e3915b.slice. 
Mar 7 01:20:29.091964 kubelet[2841]: E0307 01:20:29.053995 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.091964 kubelet[2841]: I0307 01:20:29.059313 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ac7dc0f17b4289fb0bfd480e1c8ad53-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8271a56a8b\" (UID: \"2ac7dc0f17b4289fb0bfd480e1c8ad53\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.099382 containerd[1696]: time="2026-03-07T01:20:29.099246769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8271a56a8b,Uid:4056c3137fa4f73cd17937ae13e3915b,Namespace:kube-system,Attempt:0,}" Mar 7 01:20:29.106310 systemd[1]: Created slice kubepods-burstable-pod2ac7dc0f17b4289fb0bfd480e1c8ad53.slice - libcontainer container kubepods-burstable-pod2ac7dc0f17b4289fb0bfd480e1c8ad53.slice. 
Mar 7 01:20:29.109238 kubelet[2841]: E0307 01:20:29.109004 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.257995 kubelet[2841]: E0307 01:20:29.257945 2841 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8271a56a8b?timeout=10s\": dial tcp 10.200.8.18:6443: connect: connection refused" interval="800ms" Mar 7 01:20:29.261782 containerd[1696]: time="2026-03-07T01:20:29.261738526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8271a56a8b,Uid:7f498f99c68c4b39502d7e063118a861,Namespace:kube-system,Attempt:0,}" Mar 7 01:20:29.417325 containerd[1696]: time="2026-03-07T01:20:29.417182386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8271a56a8b,Uid:2ac7dc0f17b4289fb0bfd480e1c8ad53,Namespace:kube-system,Attempt:0,}" Mar 7 01:20:29.448451 kubelet[2841]: I0307 01:20:29.448417 2841 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.448813 kubelet[2841]: E0307 01:20:29.448777 2841 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.8.18:6443/api/v1/nodes\": dial tcp 10.200.8.18:6443: connect: connection refused" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:29.989118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645022736.mount: Deactivated successfully. 
Mar 7 01:20:30.017820 containerd[1696]: time="2026-03-07T01:20:30.017754429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:20:30.020616 containerd[1696]: time="2026-03-07T01:20:30.020556968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 7 01:20:30.023232 containerd[1696]: time="2026-03-07T01:20:30.023186104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:20:30.026249 containerd[1696]: time="2026-03-07T01:20:30.026211346Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:20:30.029108 containerd[1696]: time="2026-03-07T01:20:30.029055186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:20:30.032071 containerd[1696]: time="2026-03-07T01:20:30.032032227Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:20:30.038049 containerd[1696]: time="2026-03-07T01:20:30.037771207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:20:30.042018 containerd[1696]: time="2026-03-07T01:20:30.041983565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:20:30.042758 
containerd[1696]: time="2026-03-07T01:20:30.042720275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 780.898948ms" Mar 7 01:20:30.044878 containerd[1696]: time="2026-03-07T01:20:30.044840505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.579218ms" Mar 7 01:20:30.045459 containerd[1696]: time="2026-03-07T01:20:30.045426313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 946.091843ms" Mar 7 01:20:30.058578 kubelet[2841]: E0307 01:20:30.058532 2841 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8271a56a8b?timeout=10s\": dial tcp 10.200.8.18:6443: connect: connection refused" interval="1.6s" Mar 7 01:20:30.254875 kubelet[2841]: I0307 01:20:30.254754 2841 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:30.255914 kubelet[2841]: E0307 01:20:30.255871 2841 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.8.18:6443/api/v1/nodes\": dial tcp 10.200.8.18:6443: connect: connection refused" node="ci-4081.3.6-n-8271a56a8b" Mar 7 
01:20:30.311542 containerd[1696]: time="2026-03-07T01:20:30.310019489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:30.311542 containerd[1696]: time="2026-03-07T01:20:30.310090490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:30.311542 containerd[1696]: time="2026-03-07T01:20:30.310182691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:30.311542 containerd[1696]: time="2026-03-07T01:20:30.310316893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:30.317298 containerd[1696]: time="2026-03-07T01:20:30.317208089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:30.318007 containerd[1696]: time="2026-03-07T01:20:30.317328790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:30.318007 containerd[1696]: time="2026-03-07T01:20:30.317354491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:30.318007 containerd[1696]: time="2026-03-07T01:20:30.317446392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:30.319552 containerd[1696]: time="2026-03-07T01:20:30.319049914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:30.319552 containerd[1696]: time="2026-03-07T01:20:30.319246517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:30.319552 containerd[1696]: time="2026-03-07T01:20:30.319270017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:30.319552 containerd[1696]: time="2026-03-07T01:20:30.319362819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:30.349311 systemd[1]: Started cri-containerd-4c53a4c57fee39ae5988455efbd5fc5b4693a826b827764737483bdd3f043051.scope - libcontainer container 4c53a4c57fee39ae5988455efbd5fc5b4693a826b827764737483bdd3f043051. Mar 7 01:20:30.360408 systemd[1]: Started cri-containerd-6e27efd0e56baa3c42e978f0c94d161701cd5fb4ce53e46a8c30dd51df71aec3.scope - libcontainer container 6e27efd0e56baa3c42e978f0c94d161701cd5fb4ce53e46a8c30dd51df71aec3. Mar 7 01:20:30.365343 systemd[1]: Started cri-containerd-d6bf3dcaeb45d9a4a09d63e573655c6a9b09377c65ed73ec472a23b4efc6cded.scope - libcontainer container d6bf3dcaeb45d9a4a09d63e573655c6a9b09377c65ed73ec472a23b4efc6cded. 
Mar 7 01:20:30.432995 containerd[1696]: time="2026-03-07T01:20:30.432863495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8271a56a8b,Uid:7f498f99c68c4b39502d7e063118a861,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6bf3dcaeb45d9a4a09d63e573655c6a9b09377c65ed73ec472a23b4efc6cded\"" Mar 7 01:20:30.443149 containerd[1696]: time="2026-03-07T01:20:30.443110638Z" level=info msg="CreateContainer within sandbox \"d6bf3dcaeb45d9a4a09d63e573655c6a9b09377c65ed73ec472a23b4efc6cded\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:20:30.450472 containerd[1696]: time="2026-03-07T01:20:30.450262637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8271a56a8b,Uid:4056c3137fa4f73cd17937ae13e3915b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c53a4c57fee39ae5988455efbd5fc5b4693a826b827764737483bdd3f043051\"" Mar 7 01:20:30.459894 containerd[1696]: time="2026-03-07T01:20:30.459854270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8271a56a8b,Uid:2ac7dc0f17b4289fb0bfd480e1c8ad53,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e27efd0e56baa3c42e978f0c94d161701cd5fb4ce53e46a8c30dd51df71aec3\"" Mar 7 01:20:30.468084 containerd[1696]: time="2026-03-07T01:20:30.468042884Z" level=info msg="CreateContainer within sandbox \"4c53a4c57fee39ae5988455efbd5fc5b4693a826b827764737483bdd3f043051\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:20:30.473136 containerd[1696]: time="2026-03-07T01:20:30.473085754Z" level=info msg="CreateContainer within sandbox \"6e27efd0e56baa3c42e978f0c94d161701cd5fb4ce53e46a8c30dd51df71aec3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:20:30.516766 containerd[1696]: time="2026-03-07T01:20:30.516633459Z" level=info msg="CreateContainer within sandbox 
\"d6bf3dcaeb45d9a4a09d63e573655c6a9b09377c65ed73ec472a23b4efc6cded\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6d26fe4735a23f4b945f1ce35b631758a3017492780c917fc9f7218c7033663\"" Mar 7 01:20:30.517574 containerd[1696]: time="2026-03-07T01:20:30.517536372Z" level=info msg="StartContainer for \"d6d26fe4735a23f4b945f1ce35b631758a3017492780c917fc9f7218c7033663\"" Mar 7 01:20:30.535089 containerd[1696]: time="2026-03-07T01:20:30.535039015Z" level=info msg="CreateContainer within sandbox \"4c53a4c57fee39ae5988455efbd5fc5b4693a826b827764737483bdd3f043051\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06eb276a4c3d81a3166a80ac5d841dc18bbcbf09d997a49deab981cdaf765727\"" Mar 7 01:20:30.536186 containerd[1696]: time="2026-03-07T01:20:30.535960727Z" level=info msg="StartContainer for \"06eb276a4c3d81a3166a80ac5d841dc18bbcbf09d997a49deab981cdaf765727\"" Mar 7 01:20:30.549567 containerd[1696]: time="2026-03-07T01:20:30.549526016Z" level=info msg="CreateContainer within sandbox \"6e27efd0e56baa3c42e978f0c94d161701cd5fb4ce53e46a8c30dd51df71aec3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a3e9a1cb9be57ffe8748aac87f6943854ddd183e33946dfe4e4d421c67cdec30\"" Mar 7 01:20:30.549571 systemd[1]: Started cri-containerd-d6d26fe4735a23f4b945f1ce35b631758a3017492780c917fc9f7218c7033663.scope - libcontainer container d6d26fe4735a23f4b945f1ce35b631758a3017492780c917fc9f7218c7033663. Mar 7 01:20:30.551170 containerd[1696]: time="2026-03-07T01:20:30.551136438Z" level=info msg="StartContainer for \"a3e9a1cb9be57ffe8748aac87f6943854ddd183e33946dfe4e4d421c67cdec30\"" Mar 7 01:20:30.588575 systemd[1]: Started cri-containerd-06eb276a4c3d81a3166a80ac5d841dc18bbcbf09d997a49deab981cdaf765727.scope - libcontainer container 06eb276a4c3d81a3166a80ac5d841dc18bbcbf09d997a49deab981cdaf765727. 
Mar 7 01:20:30.597292 systemd[1]: Started cri-containerd-a3e9a1cb9be57ffe8748aac87f6943854ddd183e33946dfe4e4d421c67cdec30.scope - libcontainer container a3e9a1cb9be57ffe8748aac87f6943854ddd183e33946dfe4e4d421c67cdec30. Mar 7 01:20:30.647988 containerd[1696]: time="2026-03-07T01:20:30.646741666Z" level=info msg="StartContainer for \"d6d26fe4735a23f4b945f1ce35b631758a3017492780c917fc9f7218c7033663\" returns successfully" Mar 7 01:20:30.677056 containerd[1696]: time="2026-03-07T01:20:30.676861385Z" level=info msg="StartContainer for \"06eb276a4c3d81a3166a80ac5d841dc18bbcbf09d997a49deab981cdaf765727\" returns successfully" Mar 7 01:20:30.706177 containerd[1696]: time="2026-03-07T01:20:30.706021490Z" level=info msg="StartContainer for \"a3e9a1cb9be57ffe8748aac87f6943854ddd183e33946dfe4e4d421c67cdec30\" returns successfully" Mar 7 01:20:30.711913 kubelet[2841]: E0307 01:20:30.711596 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:30.721636 kubelet[2841]: E0307 01:20:30.721393 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:30.748487 kubelet[2841]: E0307 01:20:30.748437 2841 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:20:31.724951 kubelet[2841]: E0307 01:20:31.724893 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 
01:20:31.727145 kubelet[2841]: E0307 01:20:31.725823 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:31.727145 kubelet[2841]: E0307 01:20:31.726619 2841 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:31.858353 kubelet[2841]: I0307 01:20:31.858309 2841 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.281748 kubelet[2841]: E0307 01:20:32.281707 2841 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-8271a56a8b\" not found" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.380586 kubelet[2841]: I0307 01:20:32.380458 2841 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.380586 kubelet[2841]: E0307 01:20:32.380504 2841 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-8271a56a8b\": node \"ci-4081.3.6-n-8271a56a8b\" not found" Mar 7 01:20:32.452525 kubelet[2841]: I0307 01:20:32.452475 2841 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.527767 kubelet[2841]: E0307 01:20:32.527492 2841 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.527767 kubelet[2841]: I0307 01:20:32.527532 2841 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.533367 kubelet[2841]: E0307 01:20:32.533230 2841 kubelet.go:3342] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.533367 kubelet[2841]: I0307 01:20:32.533273 2841 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.546175 kubelet[2841]: E0307 01:20:32.546129 2841 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8271a56a8b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.627947 kubelet[2841]: I0307 01:20:32.627697 2841 apiserver.go:52] "Watching apiserver" Mar 7 01:20:32.656550 kubelet[2841]: I0307 01:20:32.656476 2841 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:20:32.724584 kubelet[2841]: I0307 01:20:32.724550 2841 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:32.726479 kubelet[2841]: E0307 01:20:32.726444 2841 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8271a56a8b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:33.730205 kubelet[2841]: I0307 01:20:33.728279 2841 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:33.740865 kubelet[2841]: I0307 01:20:33.740820 2841 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:20:34.491808 systemd[1]: Reloading requested from client PID 3119 ('systemctl') (unit session-9.scope)... 
Mar 7 01:20:34.491825 systemd[1]: Reloading... Mar 7 01:20:34.615126 zram_generator::config[3159]: No configuration found. Mar 7 01:20:34.737873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:20:34.850227 systemd[1]: Reloading finished in 357 ms. Mar 7 01:20:34.896678 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:34.908712 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:20:34.908992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:34.914418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:20:37.054717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:20:37.066468 (kubelet)[3226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:20:37.116421 kubelet[3226]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:20:37.123757 kubelet[3226]: I0307 01:20:37.123705 3226 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 01:20:37.123757 kubelet[3226]: I0307 01:20:37.123744 3226 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:20:37.123757 kubelet[3226]: I0307 01:20:37.123763 3226 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:20:37.123757 kubelet[3226]: I0307 01:20:37.123769 3226 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 01:20:37.124127 kubelet[3226]: I0307 01:20:37.124090 3226 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:20:37.125366 kubelet[3226]: I0307 01:20:37.125331 3226 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:20:37.128861 kubelet[3226]: I0307 01:20:37.128247 3226 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:20:37.134561 kubelet[3226]: E0307 01:20:37.134511 3226 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:20:37.134696 kubelet[3226]: I0307 01:20:37.134592 3226 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:20:37.138312 kubelet[3226]: I0307 01:20:37.138283 3226 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:20:37.138537 kubelet[3226]: I0307 01:20:37.138492 3226 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:20:37.138707 kubelet[3226]: I0307 01:20:37.138536 3226 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8271a56a8b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:20:37.138851 kubelet[3226]: I0307 01:20:37.138715 3226 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 
01:20:37.138851 kubelet[3226]: I0307 01:20:37.138728 3226 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:20:37.138851 kubelet[3226]: I0307 01:20:37.138754 3226 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:20:37.139075 kubelet[3226]: I0307 01:20:37.139013 3226 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:20:37.142189 kubelet[3226]: I0307 01:20:37.139315 3226 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:20:37.142189 kubelet[3226]: I0307 01:20:37.139346 3226 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:20:37.142189 kubelet[3226]: I0307 01:20:37.139373 3226 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:20:37.142189 kubelet[3226]: I0307 01:20:37.139400 3226 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:20:37.143220 kubelet[3226]: I0307 01:20:37.143181 3226 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:20:37.145538 kubelet[3226]: I0307 01:20:37.144455 3226 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:20:37.145836 kubelet[3226]: I0307 01:20:37.145744 3226 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:20:37.149890 kubelet[3226]: I0307 01:20:37.149874 3226 server.go:1257] "Started kubelet" Mar 7 01:20:37.153736 kubelet[3226]: I0307 01:20:37.153623 3226 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:20:37.167809 kubelet[3226]: I0307 01:20:37.167536 3226 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:20:37.168488 kubelet[3226]: I0307 01:20:37.168431 3226 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:20:37.168941 kubelet[3226]: I0307 01:20:37.168625 3226 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:20:37.168941 kubelet[3226]: I0307 01:20:37.168882 3226 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:20:37.172445 kubelet[3226]: I0307 01:20:37.172404 3226 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:20:37.172930 kubelet[3226]: I0307 01:20:37.172894 3226 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 01:20:37.175203 kubelet[3226]: I0307 01:20:37.175180 3226 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:20:37.175363 kubelet[3226]: I0307 01:20:37.175325 3226 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:20:37.180068 kubelet[3226]: I0307 01:20:37.178997 3226 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:20:37.182283 kubelet[3226]: I0307 01:20:37.182255 3226 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:20:37.182700 kubelet[3226]: I0307 01:20:37.182667 3226 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:20:37.193643 kubelet[3226]: I0307 01:20:37.192237 3226 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:20:37.196819 kubelet[3226]: I0307 01:20:37.196794 3226 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:20:37.196819 kubelet[3226]: I0307 01:20:37.196820 3226 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:20:37.196976 kubelet[3226]: I0307 01:20:37.196852 3226 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:20:37.196976 kubelet[3226]: E0307 01:20:37.196927 3226 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:20:37.202993 kubelet[3226]: I0307 01:20:37.202968 3226 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:20:37.248010 kubelet[3226]: I0307 01:20:37.247979 3226 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:20:37.248010 kubelet[3226]: I0307 01:20:37.247998 3226 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:20:37.248010 kubelet[3226]: I0307 01:20:37.248020 3226 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:20:37.248269 kubelet[3226]: I0307 01:20:37.248232 3226 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 01:20:37.248269 kubelet[3226]: I0307 01:20:37.248250 3226 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 01:20:37.248452 kubelet[3226]: I0307 01:20:37.248274 3226 policy_none.go:50] "Start" Mar 7 01:20:37.248452 kubelet[3226]: I0307 01:20:37.248285 3226 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:20:37.248452 kubelet[3226]: I0307 01:20:37.248298 3226 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:20:37.248452 kubelet[3226]: I0307 01:20:37.248445 3226 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:20:37.249388 kubelet[3226]: I0307 01:20:37.248460 3226 policy_none.go:44] 
"Start" Mar 7 01:20:37.253702 kubelet[3226]: E0307 01:20:37.253680 3226 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:20:37.254323 kubelet[3226]: I0307 01:20:37.254024 3226 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:20:37.254323 kubelet[3226]: I0307 01:20:37.254062 3226 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:20:37.254579 kubelet[3226]: I0307 01:20:37.254489 3226 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:20:37.260160 kubelet[3226]: E0307 01:20:37.259374 3226 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:20:37.298041 kubelet[3226]: I0307 01:20:37.298001 3226 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.298666 kubelet[3226]: I0307 01:20:37.298638 3226 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.299675 kubelet[3226]: I0307 01:20:37.299294 3226 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.323063 kubelet[3226]: I0307 01:20:37.322769 3226 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:20:37.324679 kubelet[3226]: E0307 01:20:37.324227 3226 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8271a56a8b\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.324679 kubelet[3226]: I0307 01:20:37.322891 3226 warnings.go:107] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:20:37.324679 kubelet[3226]: I0307 01:20:37.323532 3226 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:20:37.364721 kubelet[3226]: I0307 01:20:37.364575 3226 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376118 kubelet[3226]: I0307 01:20:37.376069 3226 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376248 kubelet[3226]: I0307 01:20:37.376179 3226 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376551 kubelet[3226]: I0307 01:20:37.376511 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376655 kubelet[3226]: I0307 01:20:37.376565 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ac7dc0f17b4289fb0bfd480e1c8ad53-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8271a56a8b\" (UID: \"2ac7dc0f17b4289fb0bfd480e1c8ad53\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376655 kubelet[3226]: I0307 01:20:37.376587 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f498f99c68c4b39502d7e063118a861-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" (UID: \"7f498f99c68c4b39502d7e063118a861\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376655 kubelet[3226]: I0307 01:20:37.376609 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f498f99c68c4b39502d7e063118a861-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" (UID: \"7f498f99c68c4b39502d7e063118a861\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376655 kubelet[3226]: I0307 01:20:37.376630 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f498f99c68c4b39502d7e063118a861-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" (UID: \"7f498f99c68c4b39502d7e063118a861\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376655 kubelet[3226]: I0307 01:20:37.376652 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376842 kubelet[3226]: I0307 01:20:37.376677 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376842 kubelet[3226]: I0307 01:20:37.376702 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:37.376842 kubelet[3226]: I0307 01:20:37.376723 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4056c3137fa4f73cd17937ae13e3915b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8271a56a8b\" (UID: \"4056c3137fa4f73cd17937ae13e3915b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:38.141429 kubelet[3226]: I0307 01:20:38.141175 3226 apiserver.go:52] "Watching apiserver" Mar 7 01:20:38.175925 kubelet[3226]: I0307 01:20:38.175880 3226 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:20:38.229207 kubelet[3226]: I0307 01:20:38.229172 3226 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:38.235151 kubelet[3226]: I0307 01:20:38.234641 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8271a56a8b" podStartSLOduration=1.2346226009999999 podStartE2EDuration="1.234622601s" podCreationTimestamp="2026-03-07 01:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:20:38.205886515 +0000 UTC m=+1.134138977" watchObservedRunningTime="2026-03-07 01:20:38.234622601 +0000 UTC m=+1.162875063" Mar 7 01:20:38.244191 kubelet[3226]: I0307 01:20:38.243918 3226 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:20:38.244191 kubelet[3226]: E0307 01:20:38.243985 
3226 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8271a56a8b\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" Mar 7 01:20:38.269942 kubelet[3226]: I0307 01:20:38.269322 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8271a56a8b" podStartSLOduration=1.269303468 podStartE2EDuration="1.269303468s" podCreationTimestamp="2026-03-07 01:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:20:38.235825618 +0000 UTC m=+1.164077980" watchObservedRunningTime="2026-03-07 01:20:38.269303468 +0000 UTC m=+1.197555930" Mar 7 01:20:38.285677 kubelet[3226]: I0307 01:20:38.285268 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8271a56a8b" podStartSLOduration=5.285253582 podStartE2EDuration="5.285253582s" podCreationTimestamp="2026-03-07 01:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:20:38.27095389 +0000 UTC m=+1.199206352" watchObservedRunningTime="2026-03-07 01:20:38.285253582 +0000 UTC m=+1.213505944" Mar 7 01:20:39.257354 kubelet[3226]: I0307 01:20:39.257318 3226 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:20:39.258000 containerd[1696]: time="2026-03-07T01:20:39.257911764Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:20:39.258439 kubelet[3226]: I0307 01:20:39.258175 3226 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:20:40.286984 systemd[1]: Created slice kubepods-besteffort-podd602e9bf_e257_4e22_8702_525e51b93e90.slice - libcontainer container kubepods-besteffort-podd602e9bf_e257_4e22_8702_525e51b93e90.slice. Mar 7 01:20:40.296577 kubelet[3226]: I0307 01:20:40.296230 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d602e9bf-e257-4e22-8702-525e51b93e90-lib-modules\") pod \"kube-proxy-lv6wg\" (UID: \"d602e9bf-e257-4e22-8702-525e51b93e90\") " pod="kube-system/kube-proxy-lv6wg" Mar 7 01:20:40.296577 kubelet[3226]: I0307 01:20:40.296307 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrsnk\" (UniqueName: \"kubernetes.io/projected/d602e9bf-e257-4e22-8702-525e51b93e90-kube-api-access-jrsnk\") pod \"kube-proxy-lv6wg\" (UID: \"d602e9bf-e257-4e22-8702-525e51b93e90\") " pod="kube-system/kube-proxy-lv6wg" Mar 7 01:20:40.296577 kubelet[3226]: I0307 01:20:40.296394 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d602e9bf-e257-4e22-8702-525e51b93e90-kube-proxy\") pod \"kube-proxy-lv6wg\" (UID: \"d602e9bf-e257-4e22-8702-525e51b93e90\") " pod="kube-system/kube-proxy-lv6wg" Mar 7 01:20:40.296577 kubelet[3226]: I0307 01:20:40.296421 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d602e9bf-e257-4e22-8702-525e51b93e90-xtables-lock\") pod \"kube-proxy-lv6wg\" (UID: \"d602e9bf-e257-4e22-8702-525e51b93e90\") " pod="kube-system/kube-proxy-lv6wg" Mar 7 01:20:40.520524 systemd[1]: Created slice kubepods-besteffort-podf57521eb_24a3_403a_97ce_eeba6ad0a25f.slice - 
libcontainer container kubepods-besteffort-podf57521eb_24a3_403a_97ce_eeba6ad0a25f.slice. Mar 7 01:20:40.597980 kubelet[3226]: I0307 01:20:40.597926 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gslnq\" (UniqueName: \"kubernetes.io/projected/f57521eb-24a3-403a-97ce-eeba6ad0a25f-kube-api-access-gslnq\") pod \"tigera-operator-6cf4cccc57-hmsqz\" (UID: \"f57521eb-24a3-403a-97ce-eeba6ad0a25f\") " pod="tigera-operator/tigera-operator-6cf4cccc57-hmsqz" Mar 7 01:20:40.597980 kubelet[3226]: I0307 01:20:40.597975 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f57521eb-24a3-403a-97ce-eeba6ad0a25f-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-hmsqz\" (UID: \"f57521eb-24a3-403a-97ce-eeba6ad0a25f\") " pod="tigera-operator/tigera-operator-6cf4cccc57-hmsqz" Mar 7 01:20:40.602019 containerd[1696]: time="2026-03-07T01:20:40.601637237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv6wg,Uid:d602e9bf-e257-4e22-8702-525e51b93e90,Namespace:kube-system,Attempt:0,}" Mar 7 01:20:40.647723 containerd[1696]: time="2026-03-07T01:20:40.647579555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:40.648021 containerd[1696]: time="2026-03-07T01:20:40.647651256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:40.648021 containerd[1696]: time="2026-03-07T01:20:40.647771258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:40.648668 containerd[1696]: time="2026-03-07T01:20:40.648614469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:40.674297 systemd[1]: Started cri-containerd-49ed0590241536b0f6df3d4add41329e163fce8d0e29f72e974cd62d40b8aba8.scope - libcontainer container 49ed0590241536b0f6df3d4add41329e163fce8d0e29f72e974cd62d40b8aba8. Mar 7 01:20:40.698123 containerd[1696]: time="2026-03-07T01:20:40.697746180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lv6wg,Uid:d602e9bf-e257-4e22-8702-525e51b93e90,Namespace:kube-system,Attempt:0,} returns sandbox id \"49ed0590241536b0f6df3d4add41329e163fce8d0e29f72e974cd62d40b8aba8\"" Mar 7 01:20:40.713134 containerd[1696]: time="2026-03-07T01:20:40.709867657Z" level=info msg="CreateContainer within sandbox \"49ed0590241536b0f6df3d4add41329e163fce8d0e29f72e974cd62d40b8aba8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:20:40.755550 containerd[1696]: time="2026-03-07T01:20:40.755496522Z" level=info msg="CreateContainer within sandbox \"49ed0590241536b0f6df3d4add41329e163fce8d0e29f72e974cd62d40b8aba8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b5bc9cc962b8a13013be373f4c7812054f681ff3559876621f95991ec8feb52\"" Mar 7 01:20:40.756458 containerd[1696]: time="2026-03-07T01:20:40.756422436Z" level=info msg="StartContainer for \"8b5bc9cc962b8a13013be373f4c7812054f681ff3559876621f95991ec8feb52\"" Mar 7 01:20:40.786294 systemd[1]: Started cri-containerd-8b5bc9cc962b8a13013be373f4c7812054f681ff3559876621f95991ec8feb52.scope - libcontainer container 8b5bc9cc962b8a13013be373f4c7812054f681ff3559876621f95991ec8feb52. 
Mar 7 01:20:40.821119 containerd[1696]: time="2026-03-07T01:20:40.821063078Z" level=info msg="StartContainer for \"8b5bc9cc962b8a13013be373f4c7812054f681ff3559876621f95991ec8feb52\" returns successfully" Mar 7 01:20:40.829803 containerd[1696]: time="2026-03-07T01:20:40.829404899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-hmsqz,Uid:f57521eb-24a3-403a-97ce-eeba6ad0a25f,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:20:40.878412 containerd[1696]: time="2026-03-07T01:20:40.877950107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:40.878412 containerd[1696]: time="2026-03-07T01:20:40.878024108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:40.878412 containerd[1696]: time="2026-03-07T01:20:40.878041408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:40.878412 containerd[1696]: time="2026-03-07T01:20:40.878154310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:40.900766 systemd[1]: Started cri-containerd-d4434d371a167d514aca1baa32e69953e88ba923ff3a6a2e1b366e165781a024.scope - libcontainer container d4434d371a167d514aca1baa32e69953e88ba923ff3a6a2e1b366e165781a024. 
Mar 7 01:20:40.948036 containerd[1696]: time="2026-03-07T01:20:40.947990128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-hmsqz,Uid:f57521eb-24a3-403a-97ce-eeba6ad0a25f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d4434d371a167d514aca1baa32e69953e88ba923ff3a6a2e1b366e165781a024\"" Mar 7 01:20:40.951387 containerd[1696]: time="2026-03-07T01:20:40.949997357Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:20:41.255714 kubelet[3226]: I0307 01:20:41.255425 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-lv6wg" podStartSLOduration=1.255405308 podStartE2EDuration="1.255405308s" podCreationTimestamp="2026-03-07 01:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:20:41.254228191 +0000 UTC m=+4.182480553" watchObservedRunningTime="2026-03-07 01:20:41.255405308 +0000 UTC m=+4.183657670" Mar 7 01:20:42.719543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225188274.mount: Deactivated successfully. 
Mar 7 01:20:43.721085 containerd[1696]: time="2026-03-07T01:20:43.721034044Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:43.723143 containerd[1696]: time="2026-03-07T01:20:43.723063974Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:20:43.726861 containerd[1696]: time="2026-03-07T01:20:43.726802828Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:43.731926 containerd[1696]: time="2026-03-07T01:20:43.731874602Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:43.733167 containerd[1696]: time="2026-03-07T01:20:43.732569912Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.782531455s" Mar 7 01:20:43.733167 containerd[1696]: time="2026-03-07T01:20:43.732611613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:20:43.739958 containerd[1696]: time="2026-03-07T01:20:43.739924419Z" level=info msg="CreateContainer within sandbox \"d4434d371a167d514aca1baa32e69953e88ba923ff3a6a2e1b366e165781a024\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:20:43.771280 containerd[1696]: time="2026-03-07T01:20:43.771229076Z" level=info msg="CreateContainer within sandbox 
\"d4434d371a167d514aca1baa32e69953e88ba923ff3a6a2e1b366e165781a024\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"784c16bae04879d0397f179104a5f07aa6928f3c06c9c8482068820d98c2b809\"" Mar 7 01:20:43.772730 containerd[1696]: time="2026-03-07T01:20:43.771855385Z" level=info msg="StartContainer for \"784c16bae04879d0397f179104a5f07aa6928f3c06c9c8482068820d98c2b809\"" Mar 7 01:20:43.804286 systemd[1]: Started cri-containerd-784c16bae04879d0397f179104a5f07aa6928f3c06c9c8482068820d98c2b809.scope - libcontainer container 784c16bae04879d0397f179104a5f07aa6928f3c06c9c8482068820d98c2b809. Mar 7 01:20:43.837893 containerd[1696]: time="2026-03-07T01:20:43.837749145Z" level=info msg="StartContainer for \"784c16bae04879d0397f179104a5f07aa6928f3c06c9c8482068820d98c2b809\" returns successfully" Mar 7 01:20:47.368713 kubelet[3226]: I0307 01:20:47.368625 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-hmsqz" podStartSLOduration=4.584784734 podStartE2EDuration="7.368608107s" podCreationTimestamp="2026-03-07 01:20:40 +0000 UTC" firstStartedPulling="2026-03-07 01:20:40.949641952 +0000 UTC m=+3.877894414" lastFinishedPulling="2026-03-07 01:20:43.733465325 +0000 UTC m=+6.661717787" observedRunningTime="2026-03-07 01:20:44.257354261 +0000 UTC m=+7.185606723" watchObservedRunningTime="2026-03-07 01:20:47.368608107 +0000 UTC m=+10.296860569" Mar 7 01:20:48.364009 sudo[2325]: pam_unix(sudo:session): session closed for user root Mar 7 01:20:48.466877 sshd[2322]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:48.471886 systemd[1]: sshd@6-10.200.8.18:22-10.200.16.10:49200.service: Deactivated successfully. Mar 7 01:20:48.472364 systemd-logind[1672]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:20:48.476372 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 7 01:20:48.478222 systemd[1]: session-9.scope: Consumed 3.531s CPU time, 159.6M memory peak, 0B memory swap peak. Mar 7 01:20:48.482614 systemd-logind[1672]: Removed session 9. Mar 7 01:20:52.228034 systemd[1]: Created slice kubepods-besteffort-pod4645c932_e77d_4c00_84ae_d575648b3c12.slice - libcontainer container kubepods-besteffort-pod4645c932_e77d_4c00_84ae_d575648b3c12.slice. Mar 7 01:20:52.276588 kubelet[3226]: I0307 01:20:52.276467 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnz6w\" (UniqueName: \"kubernetes.io/projected/4645c932-e77d-4c00-84ae-d575648b3c12-kube-api-access-bnz6w\") pod \"calico-typha-bb9c99857-6ntqz\" (UID: \"4645c932-e77d-4c00-84ae-d575648b3c12\") " pod="calico-system/calico-typha-bb9c99857-6ntqz" Mar 7 01:20:52.276588 kubelet[3226]: I0307 01:20:52.276503 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4645c932-e77d-4c00-84ae-d575648b3c12-tigera-ca-bundle\") pod \"calico-typha-bb9c99857-6ntqz\" (UID: \"4645c932-e77d-4c00-84ae-d575648b3c12\") " pod="calico-system/calico-typha-bb9c99857-6ntqz" Mar 7 01:20:52.276588 kubelet[3226]: I0307 01:20:52.276519 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4645c932-e77d-4c00-84ae-d575648b3c12-typha-certs\") pod \"calico-typha-bb9c99857-6ntqz\" (UID: \"4645c932-e77d-4c00-84ae-d575648b3c12\") " pod="calico-system/calico-typha-bb9c99857-6ntqz" Mar 7 01:20:52.356275 systemd[1]: Created slice kubepods-besteffort-pod1dd94a39_3f4a_47ea_b304_4897da148153.slice - libcontainer container kubepods-besteffort-pod1dd94a39_3f4a_47ea_b304_4897da148153.slice. 
Mar 7 01:20:52.378389 kubelet[3226]: I0307 01:20:52.377156 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-cni-bin-dir\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378389 kubelet[3226]: I0307 01:20:52.377203 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-cni-log-dir\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378389 kubelet[3226]: I0307 01:20:52.377224 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dd94a39-3f4a-47ea-b304-4897da148153-tigera-ca-bundle\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378389 kubelet[3226]: I0307 01:20:52.377269 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-var-run-calico\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378389 kubelet[3226]: I0307 01:20:52.377289 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-xtables-lock\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378732 kubelet[3226]: I0307 01:20:52.377310 3226 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8j8\" (UniqueName: \"kubernetes.io/projected/1dd94a39-3f4a-47ea-b304-4897da148153-kube-api-access-fl8j8\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378732 kubelet[3226]: I0307 01:20:52.377349 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-flexvol-driver-host\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378732 kubelet[3226]: I0307 01:20:52.377371 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1dd94a39-3f4a-47ea-b304-4897da148153-node-certs\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378732 kubelet[3226]: I0307 01:20:52.377390 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-var-lib-calico\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378732 kubelet[3226]: I0307 01:20:52.377411 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-cni-net-dir\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378947 kubelet[3226]: I0307 01:20:52.377430 3226 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-lib-modules\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378947 kubelet[3226]: I0307 01:20:52.377453 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-sys-fs\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378947 kubelet[3226]: I0307 01:20:52.377473 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-nodeproc\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378947 kubelet[3226]: I0307 01:20:52.377493 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-policysync\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.378947 kubelet[3226]: I0307 01:20:52.377531 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1dd94a39-3f4a-47ea-b304-4897da148153-bpffs\") pod \"calico-node-fwssz\" (UID: \"1dd94a39-3f4a-47ea-b304-4897da148153\") " pod="calico-system/calico-node-fwssz" Mar 7 01:20:52.495165 kubelet[3226]: E0307 01:20:52.494215 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.496474 kubelet[3226]: 
W0307 01:20:52.495690 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.497280 kubelet[3226]: E0307 01:20:52.496640 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.504085 kubelet[3226]: E0307 01:20:52.503774 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:20:52.521256 kubelet[3226]: E0307 01:20:52.521224 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.521256 kubelet[3226]: W0307 01:20:52.521252 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.521449 kubelet[3226]: E0307 01:20:52.521278 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.539222 containerd[1696]: time="2026-03-07T01:20:52.539174859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb9c99857-6ntqz,Uid:4645c932-e77d-4c00-84ae-d575648b3c12,Namespace:calico-system,Attempt:0,}" Mar 7 01:20:52.569428 kubelet[3226]: E0307 01:20:52.569383 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.569428 kubelet[3226]: W0307 01:20:52.569423 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.569613 kubelet[3226]: E0307 01:20:52.569455 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.569956 kubelet[3226]: E0307 01:20:52.569919 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.569956 kubelet[3226]: W0307 01:20:52.569951 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.570109 kubelet[3226]: E0307 01:20:52.569975 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.574674 kubelet[3226]: E0307 01:20:52.572129 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.574674 kubelet[3226]: W0307 01:20:52.572150 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.574674 kubelet[3226]: E0307 01:20:52.572168 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.574674 kubelet[3226]: E0307 01:20:52.572443 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.574674 kubelet[3226]: W0307 01:20:52.572453 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.574674 kubelet[3226]: E0307 01:20:52.572465 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.575002 kubelet[3226]: E0307 01:20:52.574700 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.575002 kubelet[3226]: W0307 01:20:52.574728 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.575002 kubelet[3226]: E0307 01:20:52.574745 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.575228 kubelet[3226]: E0307 01:20:52.575208 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.575228 kubelet[3226]: W0307 01:20:52.575227 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.575336 kubelet[3226]: E0307 01:20:52.575245 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.578343 kubelet[3226]: E0307 01:20:52.576592 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.578343 kubelet[3226]: W0307 01:20:52.576609 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.578343 kubelet[3226]: E0307 01:20:52.576807 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.579007 kubelet[3226]: E0307 01:20:52.578982 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.579007 kubelet[3226]: W0307 01:20:52.579002 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.579159 kubelet[3226]: E0307 01:20:52.579017 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.582118 kubelet[3226]: E0307 01:20:52.581467 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.582118 kubelet[3226]: W0307 01:20:52.581484 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.582118 kubelet[3226]: E0307 01:20:52.581500 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.582839 kubelet[3226]: E0307 01:20:52.582818 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.582839 kubelet[3226]: W0307 01:20:52.582837 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.582958 kubelet[3226]: E0307 01:20:52.582853 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.583929 kubelet[3226]: E0307 01:20:52.583898 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.583929 kubelet[3226]: W0307 01:20:52.583925 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.584072 kubelet[3226]: E0307 01:20:52.583940 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.584987 kubelet[3226]: E0307 01:20:52.584898 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.584987 kubelet[3226]: W0307 01:20:52.584919 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.584987 kubelet[3226]: E0307 01:20:52.584934 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.585422 kubelet[3226]: E0307 01:20:52.585404 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.585422 kubelet[3226]: W0307 01:20:52.585419 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.585575 kubelet[3226]: E0307 01:20:52.585437 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.586692 kubelet[3226]: E0307 01:20:52.585948 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.586692 kubelet[3226]: W0307 01:20:52.585965 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.586692 kubelet[3226]: E0307 01:20:52.585981 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.586692 kubelet[3226]: E0307 01:20:52.586500 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.586692 kubelet[3226]: W0307 01:20:52.586513 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.586692 kubelet[3226]: E0307 01:20:52.586529 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.587197 kubelet[3226]: E0307 01:20:52.587182 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.587300 kubelet[3226]: W0307 01:20:52.587287 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.587381 kubelet[3226]: E0307 01:20:52.587367 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.587808 kubelet[3226]: E0307 01:20:52.587792 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.587908 kubelet[3226]: W0307 01:20:52.587895 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.587990 kubelet[3226]: E0307 01:20:52.587977 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.588309 kubelet[3226]: E0307 01:20:52.588294 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.588477 kubelet[3226]: W0307 01:20:52.588399 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.588477 kubelet[3226]: E0307 01:20:52.588422 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.588915 kubelet[3226]: E0307 01:20:52.588796 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.588915 kubelet[3226]: W0307 01:20:52.588810 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.588915 kubelet[3226]: E0307 01:20:52.588824 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.589090 kubelet[3226]: E0307 01:20:52.589051 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.589090 kubelet[3226]: W0307 01:20:52.589062 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.589090 kubelet[3226]: E0307 01:20:52.589075 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.589713 kubelet[3226]: E0307 01:20:52.589577 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.589713 kubelet[3226]: W0307 01:20:52.589591 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.589713 kubelet[3226]: E0307 01:20:52.589606 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.589713 kubelet[3226]: I0307 01:20:52.589638 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7ade0512-4109-41d7-94b7-3d8c1b62c155-registration-dir\") pod \"csi-node-driver-l6gbz\" (UID: \"7ade0512-4109-41d7-94b7-3d8c1b62c155\") " pod="calico-system/csi-node-driver-l6gbz" Mar 7 01:20:52.592670 kubelet[3226]: E0307 01:20:52.592634 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.592670 kubelet[3226]: W0307 01:20:52.592657 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.592794 kubelet[3226]: E0307 01:20:52.592674 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.592837 kubelet[3226]: I0307 01:20:52.592806 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7ade0512-4109-41d7-94b7-3d8c1b62c155-varrun\") pod \"csi-node-driver-l6gbz\" (UID: \"7ade0512-4109-41d7-94b7-3d8c1b62c155\") " pod="calico-system/csi-node-driver-l6gbz" Mar 7 01:20:52.593665 kubelet[3226]: E0307 01:20:52.593639 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.593665 kubelet[3226]: W0307 01:20:52.593661 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.593972 kubelet[3226]: E0307 01:20:52.593947 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.594159 kubelet[3226]: I0307 01:20:52.594135 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nl8t\" (UniqueName: \"kubernetes.io/projected/7ade0512-4109-41d7-94b7-3d8c1b62c155-kube-api-access-6nl8t\") pod \"csi-node-driver-l6gbz\" (UID: \"7ade0512-4109-41d7-94b7-3d8c1b62c155\") " pod="calico-system/csi-node-driver-l6gbz" Mar 7 01:20:52.594824 kubelet[3226]: E0307 01:20:52.594803 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.594824 kubelet[3226]: W0307 01:20:52.594821 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.594943 kubelet[3226]: E0307 01:20:52.594838 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.595618 kubelet[3226]: E0307 01:20:52.595591 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.595618 kubelet[3226]: W0307 01:20:52.595610 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.595743 kubelet[3226]: E0307 01:20:52.595626 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.596618 kubelet[3226]: E0307 01:20:52.596596 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.596618 kubelet[3226]: W0307 01:20:52.596616 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.596742 kubelet[3226]: E0307 01:20:52.596632 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.597574 kubelet[3226]: I0307 01:20:52.596926 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ade0512-4109-41d7-94b7-3d8c1b62c155-kubelet-dir\") pod \"csi-node-driver-l6gbz\" (UID: \"7ade0512-4109-41d7-94b7-3d8c1b62c155\") " pod="calico-system/csi-node-driver-l6gbz" Mar 7 01:20:52.597666 kubelet[3226]: E0307 01:20:52.597657 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.597722 kubelet[3226]: W0307 01:20:52.597669 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.597722 kubelet[3226]: E0307 01:20:52.597684 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.598623 kubelet[3226]: E0307 01:20:52.598598 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.598623 kubelet[3226]: W0307 01:20:52.598619 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.598743 kubelet[3226]: E0307 01:20:52.598638 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.601909 kubelet[3226]: E0307 01:20:52.601892 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.602131 kubelet[3226]: W0307 01:20:52.602017 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.602131 kubelet[3226]: E0307 01:20:52.602042 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.603177 kubelet[3226]: E0307 01:20:52.603156 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.603177 kubelet[3226]: W0307 01:20:52.603175 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.603775 kubelet[3226]: E0307 01:20:52.603190 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.603775 kubelet[3226]: E0307 01:20:52.603663 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.603775 kubelet[3226]: W0307 01:20:52.603676 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.603775 kubelet[3226]: E0307 01:20:52.603690 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.604500 kubelet[3226]: E0307 01:20:52.604362 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.604500 kubelet[3226]: W0307 01:20:52.604376 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.604500 kubelet[3226]: E0307 01:20:52.604391 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.604758 containerd[1696]: time="2026-03-07T01:20:52.603923053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:52.604758 containerd[1696]: time="2026-03-07T01:20:52.604014655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:52.604758 containerd[1696]: time="2026-03-07T01:20:52.604037155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:52.604758 containerd[1696]: time="2026-03-07T01:20:52.604239959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:52.605974 kubelet[3226]: E0307 01:20:52.605732 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.605974 kubelet[3226]: W0307 01:20:52.605749 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.605974 kubelet[3226]: E0307 01:20:52.605764 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.605974 kubelet[3226]: I0307 01:20:52.605797 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7ade0512-4109-41d7-94b7-3d8c1b62c155-socket-dir\") pod \"csi-node-driver-l6gbz\" (UID: \"7ade0512-4109-41d7-94b7-3d8c1b62c155\") " pod="calico-system/csi-node-driver-l6gbz" Mar 7 01:20:52.606210 kubelet[3226]: E0307 01:20:52.606122 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.606210 kubelet[3226]: W0307 01:20:52.606139 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.606210 kubelet[3226]: E0307 01:20:52.606155 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.606492 kubelet[3226]: E0307 01:20:52.606474 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.606492 kubelet[3226]: W0307 01:20:52.606490 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.606601 kubelet[3226]: E0307 01:20:52.606504 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.629908 systemd[1]: Started cri-containerd-6bc88fc945e866777e57b2dc0b85697ca2a2b3922e7183b0466c91780c309709.scope - libcontainer container 6bc88fc945e866777e57b2dc0b85697ca2a2b3922e7183b0466c91780c309709. Mar 7 01:20:52.668490 containerd[1696]: time="2026-03-07T01:20:52.668432343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fwssz,Uid:1dd94a39-3f4a-47ea-b304-4897da148153,Namespace:calico-system,Attempt:0,}" Mar 7 01:20:52.706865 kubelet[3226]: E0307 01:20:52.706664 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.706865 kubelet[3226]: W0307 01:20:52.706689 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.706865 kubelet[3226]: E0307 01:20:52.706716 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.707701 kubelet[3226]: E0307 01:20:52.707065 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.707701 kubelet[3226]: W0307 01:20:52.707078 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.707701 kubelet[3226]: E0307 01:20:52.707114 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.707701 kubelet[3226]: E0307 01:20:52.707422 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.707701 kubelet[3226]: W0307 01:20:52.707434 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.707701 kubelet[3226]: E0307 01:20:52.707448 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.709129 kubelet[3226]: E0307 01:20:52.708159 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.709129 kubelet[3226]: W0307 01:20:52.708181 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.709129 kubelet[3226]: E0307 01:20:52.708197 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.709524 kubelet[3226]: E0307 01:20:52.709506 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.709524 kubelet[3226]: W0307 01:20:52.709524 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.709647 kubelet[3226]: E0307 01:20:52.709539 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.710282 kubelet[3226]: E0307 01:20:52.710174 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.710282 kubelet[3226]: W0307 01:20:52.710189 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.710282 kubelet[3226]: E0307 01:20:52.710205 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.711361 kubelet[3226]: E0307 01:20:52.711079 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.711361 kubelet[3226]: W0307 01:20:52.711356 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.711672 kubelet[3226]: E0307 01:20:52.711377 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.711956 kubelet[3226]: E0307 01:20:52.711912 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.711956 kubelet[3226]: W0307 01:20:52.711931 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.711956 kubelet[3226]: E0307 01:20:52.711946 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.714128 kubelet[3226]: E0307 01:20:52.712423 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.714128 kubelet[3226]: W0307 01:20:52.712438 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.714128 kubelet[3226]: E0307 01:20:52.712452 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.714128 kubelet[3226]: E0307 01:20:52.712884 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.714128 kubelet[3226]: W0307 01:20:52.712896 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.714128 kubelet[3226]: E0307 01:20:52.712909 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.714128 kubelet[3226]: E0307 01:20:52.713355 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.714128 kubelet[3226]: W0307 01:20:52.713368 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.714128 kubelet[3226]: E0307 01:20:52.713382 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.714128 kubelet[3226]: E0307 01:20:52.713790 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.714599 kubelet[3226]: W0307 01:20:52.713808 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.714599 kubelet[3226]: E0307 01:20:52.713828 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.714599 kubelet[3226]: E0307 01:20:52.714440 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.714599 kubelet[3226]: W0307 01:20:52.714454 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.714599 kubelet[3226]: E0307 01:20:52.714469 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.714985 kubelet[3226]: E0307 01:20:52.714967 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.715034 kubelet[3226]: W0307 01:20:52.714985 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.715034 kubelet[3226]: E0307 01:20:52.714999 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.715291 kubelet[3226]: E0307 01:20:52.715269 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.715291 kubelet[3226]: W0307 01:20:52.715290 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.715404 kubelet[3226]: E0307 01:20:52.715305 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.716216 kubelet[3226]: E0307 01:20:52.716180 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.716216 kubelet[3226]: W0307 01:20:52.716198 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.716216 kubelet[3226]: E0307 01:20:52.716214 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.716941 kubelet[3226]: E0307 01:20:52.716921 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.716941 kubelet[3226]: W0307 01:20:52.716940 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.717125 kubelet[3226]: E0307 01:20:52.716954 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.717817 kubelet[3226]: E0307 01:20:52.717563 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.717817 kubelet[3226]: W0307 01:20:52.717582 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.717817 kubelet[3226]: E0307 01:20:52.717597 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.718715 kubelet[3226]: E0307 01:20:52.718152 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.718715 kubelet[3226]: W0307 01:20:52.718169 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.718715 kubelet[3226]: E0307 01:20:52.718183 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.719012 kubelet[3226]: E0307 01:20:52.718918 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.719080 kubelet[3226]: W0307 01:20:52.718932 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.719080 kubelet[3226]: E0307 01:20:52.719046 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.719677 kubelet[3226]: E0307 01:20:52.719655 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.719677 kubelet[3226]: W0307 01:20:52.719675 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.719793 kubelet[3226]: E0307 01:20:52.719691 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.720378 kubelet[3226]: E0307 01:20:52.720353 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.720378 kubelet[3226]: W0307 01:20:52.720375 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.720500 kubelet[3226]: E0307 01:20:52.720391 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.721498 kubelet[3226]: E0307 01:20:52.721181 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.721498 kubelet[3226]: W0307 01:20:52.721200 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.721498 kubelet[3226]: E0307 01:20:52.721214 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.721673 kubelet[3226]: E0307 01:20:52.721656 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.721673 kubelet[3226]: W0307 01:20:52.721668 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.721759 kubelet[3226]: E0307 01:20:52.721683 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.722311 kubelet[3226]: E0307 01:20:52.722290 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.722311 kubelet[3226]: W0307 01:20:52.722309 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.722435 kubelet[3226]: E0307 01:20:52.722324 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:52.731111 containerd[1696]: time="2026-03-07T01:20:52.731060002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bb9c99857-6ntqz,Uid:4645c932-e77d-4c00-84ae-d575648b3c12,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bc88fc945e866777e57b2dc0b85697ca2a2b3922e7183b0466c91780c309709\"" Mar 7 01:20:52.736370 containerd[1696]: time="2026-03-07T01:20:52.736243389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:20:52.737955 kubelet[3226]: E0307 01:20:52.737929 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:52.738130 kubelet[3226]: W0307 01:20:52.737953 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:52.738130 kubelet[3226]: E0307 01:20:52.737994 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:52.741712 containerd[1696]: time="2026-03-07T01:20:52.740373259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:52.741712 containerd[1696]: time="2026-03-07T01:20:52.741483278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:52.741712 containerd[1696]: time="2026-03-07T01:20:52.741520878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:52.741953 containerd[1696]: time="2026-03-07T01:20:52.741837684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:52.764282 systemd[1]: Started cri-containerd-8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243.scope - libcontainer container 8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243. Mar 7 01:20:52.801451 containerd[1696]: time="2026-03-07T01:20:52.801407990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fwssz,Uid:1dd94a39-3f4a-47ea-b304-4897da148153,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\"" Mar 7 01:20:53.295381 kubelet[3226]: E0307 01:20:53.295343 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:53.296168 kubelet[3226]: W0307 01:20:53.295364 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:53.296168 kubelet[3226]: E0307 01:20:53.295446 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:53.296168 kubelet[3226]: E0307 01:20:53.295716 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:53.296168 kubelet[3226]: W0307 01:20:53.295753 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:53.296168 kubelet[3226]: E0307 01:20:53.295769 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:53.296168 kubelet[3226]: E0307 01:20:53.296039 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:53.296168 kubelet[3226]: W0307 01:20:53.296051 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:53.296168 kubelet[3226]: E0307 01:20:53.296078 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:53.296731 kubelet[3226]: E0307 01:20:53.296714 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:53.296731 kubelet[3226]: W0307 01:20:53.296730 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:53.296861 kubelet[3226]: E0307 01:20:53.296746 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:53.297704 kubelet[3226]: E0307 01:20:53.297684 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:53.297704 kubelet[3226]: W0307 01:20:53.297704 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:53.297840 kubelet[3226]: E0307 01:20:53.297719 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:54.110260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628760071.mount: Deactivated successfully. Mar 7 01:20:54.198365 kubelet[3226]: E0307 01:20:54.198299 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:20:55.281784 containerd[1696]: time="2026-03-07T01:20:55.281734784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:55.288028 containerd[1696]: time="2026-03-07T01:20:55.287937571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:20:55.291529 containerd[1696]: time="2026-03-07T01:20:55.291379520Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:55.296953 containerd[1696]: time="2026-03-07T01:20:55.296886297Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:55.298020 containerd[1696]: time="2026-03-07T01:20:55.297584307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.561284316s" Mar 7 01:20:55.298020 containerd[1696]: time="2026-03-07T01:20:55.297625707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:20:55.299797 containerd[1696]: time="2026-03-07T01:20:55.299757637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:20:55.322335 containerd[1696]: time="2026-03-07T01:20:55.322287453Z" level=info msg="CreateContainer within sandbox \"6bc88fc945e866777e57b2dc0b85697ca2a2b3922e7183b0466c91780c309709\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:20:55.356084 containerd[1696]: time="2026-03-07T01:20:55.356029327Z" level=info msg="CreateContainer within sandbox \"6bc88fc945e866777e57b2dc0b85697ca2a2b3922e7183b0466c91780c309709\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3c925e1ea89921b1d98edfe2e6bf5f3ab10a859082d0026182207e6fe5da6deb\"" Mar 7 01:20:55.356942 containerd[1696]: time="2026-03-07T01:20:55.356867639Z" level=info msg="StartContainer for \"3c925e1ea89921b1d98edfe2e6bf5f3ab10a859082d0026182207e6fe5da6deb\"" Mar 7 01:20:55.392295 systemd[1]: Started cri-containerd-3c925e1ea89921b1d98edfe2e6bf5f3ab10a859082d0026182207e6fe5da6deb.scope - libcontainer container 
3c925e1ea89921b1d98edfe2e6bf5f3ab10a859082d0026182207e6fe5da6deb. Mar 7 01:20:55.446461 containerd[1696]: time="2026-03-07T01:20:55.446227993Z" level=info msg="StartContainer for \"3c925e1ea89921b1d98edfe2e6bf5f3ab10a859082d0026182207e6fe5da6deb\" returns successfully" Mar 7 01:20:56.197745 kubelet[3226]: E0307 01:20:56.197638 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:20:56.316129 kubelet[3226]: I0307 01:20:56.315772 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-bb9c99857-6ntqz" podStartSLOduration=1.750541021 podStartE2EDuration="4.315749901s" podCreationTimestamp="2026-03-07 01:20:52 +0000 UTC" firstStartedPulling="2026-03-07 01:20:52.733458442 +0000 UTC m=+15.661710804" lastFinishedPulling="2026-03-07 01:20:55.298667322 +0000 UTC m=+18.226919684" observedRunningTime="2026-03-07 01:20:56.29927947 +0000 UTC m=+19.227531832" watchObservedRunningTime="2026-03-07 01:20:56.315749901 +0000 UTC m=+19.244002263" Mar 7 01:20:56.319257 kubelet[3226]: E0307 01:20:56.319182 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.319257 kubelet[3226]: W0307 01:20:56.319258 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.319612 kubelet[3226]: E0307 01:20:56.319285 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.319612 kubelet[3226]: E0307 01:20:56.319533 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.319612 kubelet[3226]: W0307 01:20:56.319546 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.319612 kubelet[3226]: E0307 01:20:56.319562 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.321087 kubelet[3226]: E0307 01:20:56.321064 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.321087 kubelet[3226]: W0307 01:20:56.321085 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.321241 kubelet[3226]: E0307 01:20:56.321114 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.321446 kubelet[3226]: E0307 01:20:56.321429 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.321562 kubelet[3226]: W0307 01:20:56.321447 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.321562 kubelet[3226]: E0307 01:20:56.321463 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.321775 kubelet[3226]: E0307 01:20:56.321726 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.321775 kubelet[3226]: W0307 01:20:56.321738 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.321775 kubelet[3226]: E0307 01:20:56.321752 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.322006 kubelet[3226]: E0307 01:20:56.321980 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.322006 kubelet[3226]: W0307 01:20:56.321994 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.322257 kubelet[3226]: E0307 01:20:56.322009 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.322308 kubelet[3226]: E0307 01:20:56.322274 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.322308 kubelet[3226]: W0307 01:20:56.322286 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.322308 kubelet[3226]: E0307 01:20:56.322299 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.322904 kubelet[3226]: E0307 01:20:56.322884 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.322904 kubelet[3226]: W0307 01:20:56.322902 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.323304 kubelet[3226]: E0307 01:20:56.322917 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.323304 kubelet[3226]: E0307 01:20:56.323201 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.323304 kubelet[3226]: W0307 01:20:56.323213 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.323304 kubelet[3226]: E0307 01:20:56.323229 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.323749 kubelet[3226]: E0307 01:20:56.323434 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.323749 kubelet[3226]: W0307 01:20:56.323445 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.323749 kubelet[3226]: E0307 01:20:56.323458 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.323749 kubelet[3226]: E0307 01:20:56.323668 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.323749 kubelet[3226]: W0307 01:20:56.323680 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.323749 kubelet[3226]: E0307 01:20:56.323693 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.324342 kubelet[3226]: E0307 01:20:56.323903 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.324342 kubelet[3226]: W0307 01:20:56.323914 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.324342 kubelet[3226]: E0307 01:20:56.323926 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.325153 kubelet[3226]: E0307 01:20:56.324908 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.325153 kubelet[3226]: W0307 01:20:56.324925 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.325751 kubelet[3226]: E0307 01:20:56.325725 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.325999 kubelet[3226]: E0307 01:20:56.325983 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.325999 kubelet[3226]: W0307 01:20:56.325994 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.326141 kubelet[3226]: E0307 01:20:56.326009 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.326622 kubelet[3226]: E0307 01:20:56.326236 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.326622 kubelet[3226]: W0307 01:20:56.326247 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.326622 kubelet[3226]: E0307 01:20:56.326260 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.337083 kubelet[3226]: E0307 01:20:56.337060 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.337083 kubelet[3226]: W0307 01:20:56.337081 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.337083 kubelet[3226]: E0307 01:20:56.337112 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.337683 kubelet[3226]: E0307 01:20:56.337402 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.337683 kubelet[3226]: W0307 01:20:56.337417 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.337683 kubelet[3226]: E0307 01:20:56.337431 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.337846 kubelet[3226]: E0307 01:20:56.337712 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.337846 kubelet[3226]: W0307 01:20:56.337723 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.337846 kubelet[3226]: E0307 01:20:56.337737 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.338102 kubelet[3226]: E0307 01:20:56.338028 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.338102 kubelet[3226]: W0307 01:20:56.338039 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.338102 kubelet[3226]: E0307 01:20:56.338054 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.338351 kubelet[3226]: E0307 01:20:56.338331 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.338351 kubelet[3226]: W0307 01:20:56.338342 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.338439 kubelet[3226]: E0307 01:20:56.338356 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.338693 kubelet[3226]: E0307 01:20:56.338584 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.338693 kubelet[3226]: W0307 01:20:56.338603 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.338693 kubelet[3226]: E0307 01:20:56.338617 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.338973 kubelet[3226]: E0307 01:20:56.338879 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.338973 kubelet[3226]: W0307 01:20:56.338890 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.338973 kubelet[3226]: E0307 01:20:56.338904 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.339240 kubelet[3226]: E0307 01:20:56.339162 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.339240 kubelet[3226]: W0307 01:20:56.339173 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.339240 kubelet[3226]: E0307 01:20:56.339187 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.339568 kubelet[3226]: E0307 01:20:56.339450 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.339568 kubelet[3226]: W0307 01:20:56.339463 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.339568 kubelet[3226]: E0307 01:20:56.339478 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.340192 kubelet[3226]: E0307 01:20:56.340161 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.340192 kubelet[3226]: W0307 01:20:56.340182 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.340319 kubelet[3226]: E0307 01:20:56.340198 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.342198 kubelet[3226]: E0307 01:20:56.340606 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.342198 kubelet[3226]: W0307 01:20:56.342133 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.342198 kubelet[3226]: E0307 01:20:56.342154 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.342594 kubelet[3226]: E0307 01:20:56.342574 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.342594 kubelet[3226]: W0307 01:20:56.342592 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.342729 kubelet[3226]: E0307 01:20:56.342608 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.342967 kubelet[3226]: E0307 01:20:56.342843 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.342967 kubelet[3226]: W0307 01:20:56.342857 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.342967 kubelet[3226]: E0307 01:20:56.342871 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.343250 kubelet[3226]: E0307 01:20:56.343218 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.343250 kubelet[3226]: W0307 01:20:56.343234 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.343250 kubelet[3226]: E0307 01:20:56.343249 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.344061 kubelet[3226]: E0307 01:20:56.344040 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.344061 kubelet[3226]: W0307 01:20:56.344060 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.344237 kubelet[3226]: E0307 01:20:56.344076 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.345342 kubelet[3226]: E0307 01:20:56.345300 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.345561 kubelet[3226]: W0307 01:20:56.345317 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.345561 kubelet[3226]: E0307 01:20:56.345441 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.346197 kubelet[3226]: E0307 01:20:56.345965 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.346197 kubelet[3226]: W0307 01:20:56.345980 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.346197 kubelet[3226]: E0307 01:20:56.345996 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:20:56.347421 kubelet[3226]: E0307 01:20:56.347391 3226 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:20:56.347421 kubelet[3226]: W0307 01:20:56.347412 3226 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:20:56.347547 kubelet[3226]: E0307 01:20:56.347428 3226 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:20:56.675594 containerd[1696]: time="2026-03-07T01:20:56.675541752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:56.678284 containerd[1696]: time="2026-03-07T01:20:56.678214291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:20:56.681566 containerd[1696]: time="2026-03-07T01:20:56.681511643Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:56.686067 containerd[1696]: time="2026-03-07T01:20:56.685898413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:56.687194 containerd[1696]: time="2026-03-07T01:20:56.686586924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.386781486s" Mar 7 01:20:56.687194 containerd[1696]: time="2026-03-07T01:20:56.686631425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:20:56.694661 containerd[1696]: time="2026-03-07T01:20:56.694619652Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:20:56.733817 containerd[1696]: time="2026-03-07T01:20:56.733736576Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99\"" Mar 7 01:20:56.734522 containerd[1696]: time="2026-03-07T01:20:56.734448087Z" level=info msg="StartContainer for \"994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99\"" Mar 7 01:20:56.771248 systemd[1]: Started cri-containerd-994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99.scope - libcontainer container 994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99. Mar 7 01:20:56.800573 containerd[1696]: time="2026-03-07T01:20:56.800472940Z" level=info msg="StartContainer for \"994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99\" returns successfully" Mar 7 01:20:56.809843 systemd[1]: cri-containerd-994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99.scope: Deactivated successfully. Mar 7 01:20:56.842610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99-rootfs.mount: Deactivated successfully. 
Mar 7 01:20:58.197921 kubelet[3226]: E0307 01:20:58.197800 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:20:58.712082 containerd[1696]: time="2026-03-07T01:20:58.711831517Z" level=info msg="shim disconnected" id=994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99 namespace=k8s.io Mar 7 01:20:58.712082 containerd[1696]: time="2026-03-07T01:20:58.711903018Z" level=warning msg="cleaning up after shim disconnected" id=994f3ee3171a56e3160be22816f2b46c9257bfae456674c1f8dd468c11d6ae99 namespace=k8s.io Mar 7 01:20:58.712082 containerd[1696]: time="2026-03-07T01:20:58.711915218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:20:59.298776 containerd[1696]: time="2026-03-07T01:20:59.298719275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:21:00.197837 kubelet[3226]: E0307 01:21:00.197777 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:02.198474 kubelet[3226]: E0307 01:21:02.198077 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:04.198963 kubelet[3226]: E0307 01:21:04.197899 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:06.197827 kubelet[3226]: E0307 01:21:06.197765 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:07.288724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734187862.mount: Deactivated successfully. Mar 7 01:21:07.331625 containerd[1696]: time="2026-03-07T01:21:07.331565927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:07.335131 containerd[1696]: time="2026-03-07T01:21:07.335064377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:21:07.338712 containerd[1696]: time="2026-03-07T01:21:07.338661428Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:07.345208 containerd[1696]: time="2026-03-07T01:21:07.345162121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:07.345942 containerd[1696]: time="2026-03-07T01:21:07.345779629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.047016854s" Mar 7 01:21:07.345942 containerd[1696]: time="2026-03-07T01:21:07.345823430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:21:07.355113 containerd[1696]: time="2026-03-07T01:21:07.354904159Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:21:07.384945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779479975.mount: Deactivated successfully. Mar 7 01:21:07.394069 containerd[1696]: time="2026-03-07T01:21:07.394023416Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7\"" Mar 7 01:21:07.395073 containerd[1696]: time="2026-03-07T01:21:07.394996030Z" level=info msg="StartContainer for \"d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7\"" Mar 7 01:21:07.434285 systemd[1]: Started cri-containerd-d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7.scope - libcontainer container d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7. Mar 7 01:21:07.489368 containerd[1696]: time="2026-03-07T01:21:07.489287873Z" level=info msg="StartContainer for \"d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7\" returns successfully" Mar 7 01:21:07.507876 systemd[1]: cri-containerd-d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7.scope: Deactivated successfully. 
Mar 7 01:21:08.198427 kubelet[3226]: E0307 01:21:08.197243 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:08.289072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7-rootfs.mount: Deactivated successfully. Mar 7 01:21:10.197584 kubelet[3226]: E0307 01:21:10.197521 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:10.801364 containerd[1696]: time="2026-03-07T01:21:10.801286726Z" level=info msg="shim disconnected" id=d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7 namespace=k8s.io Mar 7 01:21:10.801364 containerd[1696]: time="2026-03-07T01:21:10.801356327Z" level=warning msg="cleaning up after shim disconnected" id=d9dc8456c36368ea69b11b1db2e05eb22a942001c16b18b262e620f6424d60d7 namespace=k8s.io Mar 7 01:21:10.801364 containerd[1696]: time="2026-03-07T01:21:10.801368827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:21:11.329582 containerd[1696]: time="2026-03-07T01:21:11.329034740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 01:21:12.198192 kubelet[3226]: E0307 01:21:12.198071 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" 
podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:14.197252 kubelet[3226]: E0307 01:21:14.197193 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:15.237727 containerd[1696]: time="2026-03-07T01:21:15.237668542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:15.241066 containerd[1696]: time="2026-03-07T01:21:15.240978389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 01:21:15.243993 containerd[1696]: time="2026-03-07T01:21:15.243914630Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:15.248171 containerd[1696]: time="2026-03-07T01:21:15.247854986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:15.249008 containerd[1696]: time="2026-03-07T01:21:15.248854500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.919753958s" Mar 7 01:21:15.249008 containerd[1696]: time="2026-03-07T01:21:15.248891000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 01:21:15.256482 containerd[1696]: time="2026-03-07T01:21:15.256381206Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:21:15.291880 containerd[1696]: time="2026-03-07T01:21:15.291824706Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd\"" Mar 7 01:21:15.292655 containerd[1696]: time="2026-03-07T01:21:15.292552616Z" level=info msg="StartContainer for \"b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd\"" Mar 7 01:21:15.332275 systemd[1]: Started cri-containerd-b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd.scope - libcontainer container b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd. 
Mar 7 01:21:15.366429 containerd[1696]: time="2026-03-07T01:21:15.366043652Z" level=info msg="StartContainer for \"b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd\" returns successfully" Mar 7 01:21:16.197767 kubelet[3226]: E0307 01:21:16.197385 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:17.029299 containerd[1696]: time="2026-03-07T01:21:17.029248305Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:21:17.031405 systemd[1]: cri-containerd-b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd.scope: Deactivated successfully. Mar 7 01:21:17.054544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd-rootfs.mount: Deactivated successfully. 
Mar 7 01:21:17.108596 kubelet[3226]: I0307 01:21:17.108563 3226 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 01:21:18.300591 containerd[1696]: time="2026-03-07T01:21:18.300184427Z" level=info msg="shim disconnected" id=b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd namespace=k8s.io Mar 7 01:21:18.300591 containerd[1696]: time="2026-03-07T01:21:18.300253828Z" level=warning msg="cleaning up after shim disconnected" id=b8c87c65b6d263cf268dfc480efe75a43e91f6ccae752b59cdaeb0a3670245cd namespace=k8s.io Mar 7 01:21:18.300591 containerd[1696]: time="2026-03-07T01:21:18.300265528Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:21:18.316872 systemd[1]: Created slice kubepods-burstable-pod20284904_5e50_41d2_bb96_a70478b6f953.slice - libcontainer container kubepods-burstable-pod20284904_5e50_41d2_bb96_a70478b6f953.slice. Mar 7 01:21:18.357057 systemd[1]: Created slice kubepods-besteffort-pod7ade0512_4109_41d7_94b7_3d8c1b62c155.slice - libcontainer container kubepods-besteffort-pod7ade0512_4109_41d7_94b7_3d8c1b62c155.slice. Mar 7 01:21:18.366338 systemd[1]: Created slice kubepods-burstable-pod93fc7b2a_d643_4053_92d9_634e00359c70.slice - libcontainer container kubepods-burstable-pod93fc7b2a_d643_4053_92d9_634e00359c70.slice. Mar 7 01:21:18.370979 containerd[1696]: time="2026-03-07T01:21:18.370675021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6gbz,Uid:7ade0512-4109-41d7-94b7-3d8c1b62c155,Namespace:calico-system,Attempt:0,}" Mar 7 01:21:18.384314 systemd[1]: Created slice kubepods-besteffort-pod91b8f873_2f68_41cb_bf26_c02fe2af4478.slice - libcontainer container kubepods-besteffort-pod91b8f873_2f68_41cb_bf26_c02fe2af4478.slice. 
Mar 7 01:21:18.397569 kubelet[3226]: I0307 01:21:18.397521 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e08f188-ee09-49c4-9a65-26ca23d44a3a-calico-apiserver-certs\") pod \"calico-apiserver-6b8468fddd-6rqlm\" (UID: \"0e08f188-ee09-49c4-9a65-26ca23d44a3a\") " pod="calico-system/calico-apiserver-6b8468fddd-6rqlm" Mar 7 01:21:18.406792 kubelet[3226]: I0307 01:21:18.405397 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20284904-5e50-41d2-bb96-a70478b6f953-config-volume\") pod \"coredns-7d764666f9-zrdvd\" (UID: \"20284904-5e50-41d2-bb96-a70478b6f953\") " pod="kube-system/coredns-7d764666f9-zrdvd" Mar 7 01:21:18.406792 kubelet[3226]: I0307 01:21:18.405441 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwtp\" (UniqueName: \"kubernetes.io/projected/9aeeed72-eae0-49fc-86f0-39c207a04179-kube-api-access-jjwtp\") pod \"whisker-65d4c8b8c7-bfgrz\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") " pod="calico-system/whisker-65d4c8b8c7-bfgrz" Mar 7 01:21:18.406792 kubelet[3226]: I0307 01:21:18.405489 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckqd\" (UniqueName: \"kubernetes.io/projected/20284904-5e50-41d2-bb96-a70478b6f953-kube-api-access-gckqd\") pod \"coredns-7d764666f9-zrdvd\" (UID: \"20284904-5e50-41d2-bb96-a70478b6f953\") " pod="kube-system/coredns-7d764666f9-zrdvd" Mar 7 01:21:18.406792 kubelet[3226]: I0307 01:21:18.405537 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fc7b2a-d643-4053-92d9-634e00359c70-config-volume\") pod \"coredns-7d764666f9-6wzp5\" (UID: 
\"93fc7b2a-d643-4053-92d9-634e00359c70\") " pod="kube-system/coredns-7d764666f9-6wzp5" Mar 7 01:21:18.406792 kubelet[3226]: I0307 01:21:18.405559 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-backend-key-pair\") pod \"whisker-65d4c8b8c7-bfgrz\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") " pod="calico-system/whisker-65d4c8b8c7-bfgrz" Mar 7 01:21:18.407761 kubelet[3226]: I0307 01:21:18.405611 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91b8f873-2f68-41cb-bf26-c02fe2af4478-tigera-ca-bundle\") pod \"calico-kube-controllers-8685cf67d4-nwltl\" (UID: \"91b8f873-2f68-41cb-bf26-c02fe2af4478\") " pod="calico-system/calico-kube-controllers-8685cf67d4-nwltl" Mar 7 01:21:18.407761 kubelet[3226]: I0307 01:21:18.405701 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggpbq\" (UniqueName: \"kubernetes.io/projected/0e08f188-ee09-49c4-9a65-26ca23d44a3a-kube-api-access-ggpbq\") pod \"calico-apiserver-6b8468fddd-6rqlm\" (UID: \"0e08f188-ee09-49c4-9a65-26ca23d44a3a\") " pod="calico-system/calico-apiserver-6b8468fddd-6rqlm" Mar 7 01:21:18.407761 kubelet[3226]: I0307 01:21:18.405722 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-ca-bundle\") pod \"whisker-65d4c8b8c7-bfgrz\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") " pod="calico-system/whisker-65d4c8b8c7-bfgrz" Mar 7 01:21:18.407761 kubelet[3226]: I0307 01:21:18.405758 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlx64\" (UniqueName: 
\"kubernetes.io/projected/91b8f873-2f68-41cb-bf26-c02fe2af4478-kube-api-access-xlx64\") pod \"calico-kube-controllers-8685cf67d4-nwltl\" (UID: \"91b8f873-2f68-41cb-bf26-c02fe2af4478\") " pod="calico-system/calico-kube-controllers-8685cf67d4-nwltl" Mar 7 01:21:18.407761 kubelet[3226]: I0307 01:21:18.405783 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c7e196a9-b819-4eb7-9c05-9cc149a1f20e-goldmane-key-pair\") pod \"goldmane-9f7667bb8-5bpvw\" (UID: \"c7e196a9-b819-4eb7-9c05-9cc149a1f20e\") " pod="calico-system/goldmane-9f7667bb8-5bpvw" Mar 7 01:21:18.408686 kubelet[3226]: I0307 01:21:18.405849 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-nginx-config\") pod \"whisker-65d4c8b8c7-bfgrz\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") " pod="calico-system/whisker-65d4c8b8c7-bfgrz" Mar 7 01:21:18.408686 kubelet[3226]: I0307 01:21:18.405872 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zjz4\" (UniqueName: \"kubernetes.io/projected/50aad275-e5a5-4169-bc0a-89669a369738-kube-api-access-6zjz4\") pod \"calico-apiserver-6b8468fddd-sh56r\" (UID: \"50aad275-e5a5-4169-bc0a-89669a369738\") " pod="calico-system/calico-apiserver-6b8468fddd-sh56r" Mar 7 01:21:18.408686 kubelet[3226]: I0307 01:21:18.405895 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e196a9-b819-4eb7-9c05-9cc149a1f20e-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-5bpvw\" (UID: \"c7e196a9-b819-4eb7-9c05-9cc149a1f20e\") " pod="calico-system/goldmane-9f7667bb8-5bpvw" Mar 7 01:21:18.408686 kubelet[3226]: I0307 01:21:18.405934 3226 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfmc\" (UniqueName: \"kubernetes.io/projected/c7e196a9-b819-4eb7-9c05-9cc149a1f20e-kube-api-access-vhfmc\") pod \"goldmane-9f7667bb8-5bpvw\" (UID: \"c7e196a9-b819-4eb7-9c05-9cc149a1f20e\") " pod="calico-system/goldmane-9f7667bb8-5bpvw" Mar 7 01:21:18.408686 kubelet[3226]: I0307 01:21:18.406025 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4q8q\" (UniqueName: \"kubernetes.io/projected/93fc7b2a-d643-4053-92d9-634e00359c70-kube-api-access-k4q8q\") pod \"coredns-7d764666f9-6wzp5\" (UID: \"93fc7b2a-d643-4053-92d9-634e00359c70\") " pod="kube-system/coredns-7d764666f9-6wzp5" Mar 7 01:21:18.408897 containerd[1696]: time="2026-03-07T01:21:18.408181949Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:21:18.408957 kubelet[3226]: I0307 01:21:18.406056 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/50aad275-e5a5-4169-bc0a-89669a369738-calico-apiserver-certs\") pod \"calico-apiserver-6b8468fddd-sh56r\" (UID: \"50aad275-e5a5-4169-bc0a-89669a369738\") " pod="calico-system/calico-apiserver-6b8468fddd-sh56r" Mar 7 01:21:18.408957 kubelet[3226]: I0307 01:21:18.406109 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7e196a9-b819-4eb7-9c05-9cc149a1f20e-config\") pod \"goldmane-9f7667bb8-5bpvw\" (UID: \"c7e196a9-b819-4eb7-9c05-9cc149a1f20e\") " pod="calico-system/goldmane-9f7667bb8-5bpvw" Mar 7 01:21:18.433884 systemd[1]: Created slice kubepods-besteffort-podc7e196a9_b819_4eb7_9c05_9cc149a1f20e.slice - libcontainer container 
kubepods-besteffort-podc7e196a9_b819_4eb7_9c05_9cc149a1f20e.slice. Mar 7 01:21:18.459840 systemd[1]: Created slice kubepods-besteffort-pod0e08f188_ee09_49c4_9a65_26ca23d44a3a.slice - libcontainer container kubepods-besteffort-pod0e08f188_ee09_49c4_9a65_26ca23d44a3a.slice. Mar 7 01:21:18.482404 containerd[1696]: time="2026-03-07T01:21:18.481749987Z" level=info msg="CreateContainer within sandbox \"8fa65309ca543c16385945ac070c83f8b5b6fa758c6d1c79477c41eccf429243\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb\"" Mar 7 01:21:18.483934 systemd[1]: Created slice kubepods-besteffort-pod50aad275_e5a5_4169_bc0a_89669a369738.slice - libcontainer container kubepods-besteffort-pod50aad275_e5a5_4169_bc0a_89669a369738.slice. Mar 7 01:21:18.487592 containerd[1696]: time="2026-03-07T01:21:18.487055262Z" level=info msg="StartContainer for \"dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb\"" Mar 7 01:21:18.504751 systemd[1]: Created slice kubepods-besteffort-pod9aeeed72_eae0_49fc_86f0_39c207a04179.slice - libcontainer container kubepods-besteffort-pod9aeeed72_eae0_49fc_86f0_39c207a04179.slice. Mar 7 01:21:18.592391 systemd[1]: Started cri-containerd-dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb.scope - libcontainer container dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb. 
Mar 7 01:21:18.619968 containerd[1696]: time="2026-03-07T01:21:18.619918735Z" level=error msg="Failed to destroy network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:21:18.620419 containerd[1696]: time="2026-03-07T01:21:18.620359741Z" level=error msg="encountered an error cleaning up failed sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:21:18.620528 containerd[1696]: time="2026-03-07T01:21:18.620446343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6gbz,Uid:7ade0512-4109-41d7-94b7-3d8c1b62c155,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:21:18.620743 kubelet[3226]: E0307 01:21:18.620705 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:21:18.620837 kubelet[3226]: E0307 01:21:18.620779 3226 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6gbz" Mar 7 01:21:18.620837 kubelet[3226]: E0307 01:21:18.620804 3226 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6gbz" Mar 7 01:21:18.620922 kubelet[3226]: E0307 01:21:18.620873 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6gbz_calico-system(7ade0512-4109-41d7-94b7-3d8c1b62c155)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6gbz_calico-system(7ade0512-4109-41d7-94b7-3d8c1b62c155)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6gbz" podUID="7ade0512-4109-41d7-94b7-3d8c1b62c155" Mar 7 01:21:18.648495 containerd[1696]: time="2026-03-07T01:21:18.647974931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zrdvd,Uid:20284904-5e50-41d2-bb96-a70478b6f953,Namespace:kube-system,Attempt:0,}" Mar 7 01:21:18.649153 containerd[1696]: time="2026-03-07T01:21:18.648969545Z" level=info msg="StartContainer for 
\"dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb\" returns successfully" Mar 7 01:21:18.680531 containerd[1696]: time="2026-03-07T01:21:18.680088984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6wzp5,Uid:93fc7b2a-d643-4053-92d9-634e00359c70,Namespace:kube-system,Attempt:0,}" Mar 7 01:21:18.715488 containerd[1696]: time="2026-03-07T01:21:18.715440182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8685cf67d4-nwltl,Uid:91b8f873-2f68-41cb-bf26-c02fe2af4478,Namespace:calico-system,Attempt:0,}" Mar 7 01:21:18.750034 containerd[1696]: time="2026-03-07T01:21:18.749974669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5bpvw,Uid:c7e196a9-b819-4eb7-9c05-9cc149a1f20e,Namespace:calico-system,Attempt:0,}" Mar 7 01:21:18.788849 containerd[1696]: time="2026-03-07T01:21:18.788478312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b8468fddd-6rqlm,Uid:0e08f188-ee09-49c4-9a65-26ca23d44a3a,Namespace:calico-system,Attempt:0,}" Mar 7 01:21:18.806887 containerd[1696]: time="2026-03-07T01:21:18.804159333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b8468fddd-sh56r,Uid:50aad275-e5a5-4169-bc0a-89669a369738,Namespace:calico-system,Attempt:0,}" Mar 7 01:21:18.858150 containerd[1696]: time="2026-03-07T01:21:18.857811990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d4c8b8c7-bfgrz,Uid:9aeeed72-eae0-49fc-86f0-39c207a04179,Namespace:calico-system,Attempt:0,}" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:18.929 [INFO][4194] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:18.930 [INFO][4194] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" iface="eth0" netns="/var/run/netns/cni-feaca858-e535-61c6-a6e3-f83f2c531c2b" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:18.930 [INFO][4194] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" iface="eth0" netns="/var/run/netns/cni-feaca858-e535-61c6-a6e3-f83f2c531c2b" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:18.931 [INFO][4194] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" iface="eth0" netns="/var/run/netns/cni-feaca858-e535-61c6-a6e3-f83f2c531c2b" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:18.931 [INFO][4194] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:18.931 [INFO][4194] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:19.193 [INFO][4248] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" HandleID="k8s-pod-network.db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:19.194 [INFO][4248] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:19.194 [INFO][4248] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:19.220 [WARNING][4248] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" HandleID="k8s-pod-network.db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:19.220 [INFO][4248] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" HandleID="k8s-pod-network.db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:19.222 [INFO][4248] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:19.240524 containerd[1696]: 2026-03-07 01:21:19.230 [INFO][4194] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19" Mar 7 01:21:19.267855 containerd[1696]: time="2026-03-07T01:21:19.267754170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zrdvd,Uid:20284904-5e50-41d2-bb96-a70478b6f953,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:21:19.270431 kubelet[3226]: E0307 01:21:19.268379 3226 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:21:19.270431 kubelet[3226]: 
E0307 01:21:19.269976 3226 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-zrdvd" Mar 7 01:21:19.270431 kubelet[3226]: E0307 01:21:19.270008 3226 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-zrdvd" Mar 7 01:21:19.270886 kubelet[3226]: E0307 01:21:19.270352 3226 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-zrdvd_kube-system(20284904-5e50-41d2-bb96-a70478b6f953)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-zrdvd_kube-system(20284904-5e50-41d2-bb96-a70478b6f953)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db0e1a5e27d141eb95a3a8ecffbe0556edbaa101b149bcf9f82affff67fd2b19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-zrdvd" podUID="20284904-5e50-41d2-bb96-a70478b6f953" Mar 7 01:21:19.363073 systemd-networkd[1329]: cali479b4be1432: Link UP Mar 7 01:21:19.363375 systemd-networkd[1329]: cali479b4be1432: Gained carrier Mar 7 01:21:19.405899 kubelet[3226]: I0307 01:21:19.405863 3226 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:19.410592 containerd[1696]: time="2026-03-07T01:21:19.410513783Z" level=info msg="StopPodSandbox for \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\"" Mar 7 01:21:19.417793 containerd[1696]: time="2026-03-07T01:21:19.417718985Z" level=info msg="Ensure that sandbox faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e in task-service has been cleanup successfully" Mar 7 01:21:19.427743 containerd[1696]: time="2026-03-07T01:21:19.418875001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zrdvd,Uid:20284904-5e50-41d2-bb96-a70478b6f953,Namespace:kube-system,Attempt:0,}" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.043 [ERROR][4226] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.076 [INFO][4226] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0 calico-apiserver-6b8468fddd- calico-system 50aad275-e5a5-4169-bc0a-89669a369738 862 0 2026-03-07 01:20:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b8468fddd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b calico-apiserver-6b8468fddd-sh56r eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali479b4be1432 [] [] }} ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" 
WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.076 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.186 [INFO][4286] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" HandleID="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.219 [INFO][4286] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" HandleID="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003eb1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"calico-apiserver-6b8468fddd-sh56r", "timestamp":"2026-03-07 01:21:19.186000617 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001134a0)} Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.220 [INFO][4286] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.222 [INFO][4286] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.222 [INFO][4286] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.234 [INFO][4286] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.247 [INFO][4286] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.271 [INFO][4286] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.284 [INFO][4286] ipam/ipam.go 575: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.290 [INFO][4286] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.290 [INFO][4286] ipam/ipam.go 588: Found unclaimed block in 3.979956ms host="ci-4081.3.6-n-8271a56a8b" subnet=192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.290 [INFO][4286] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4081.3.6-n-8271a56a8b" subnet=192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.297 [INFO][4286] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4081.3.6-n-8271a56a8b" subnet=192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.297 [INFO][4286] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.303 [INFO][4286] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.308 [INFO][4286] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.310 [INFO][4286] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.310 [INFO][4286] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4081.3.6-n-8271a56a8b" subnet=192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.314 [INFO][4286] ipam/ipam_block_reader_writer.go 267: Successfully created block Mar 7 01:21:19.432119 
containerd[1696]: 2026-03-07 01:21:19.314 [INFO][4286] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4081.3.6-n-8271a56a8b" subnet=192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.322 [INFO][4286] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4081.3.6-n-8271a56a8b" subnet=192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.322 [INFO][4286] ipam/ipam.go 623: Block '192.168.45.64/26' has 64 free ips which is more than 1 ips required. host="ci-4081.3.6-n-8271a56a8b" subnet=192.168.45.64/26 Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.322 [INFO][4286] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.432119 containerd[1696]: 2026-03-07 01:21:19.323 [INFO][4286] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8 Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.327 [INFO][4286] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.336 [INFO][4286] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.45.64/26] block=192.168.45.64/26 handle="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.336 [INFO][4286] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.64/26] handle="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.336 [INFO][4286] 
ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.336 [INFO][4286] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.64/26] IPv6=[] ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" HandleID="k8s-pod-network.6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.338 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0", GenerateName:"calico-apiserver-6b8468fddd-", Namespace:"calico-system", SelfLink:"", UID:"50aad275-e5a5-4169-bc0a-89669a369738", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b8468fddd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"calico-apiserver-6b8468fddd-sh56r", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali479b4be1432", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.338 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.64/32] ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.338 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali479b4be1432 ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.362 [INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" Mar 7 01:21:19.437610 containerd[1696]: 2026-03-07 01:21:19.363 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0", GenerateName:"calico-apiserver-6b8468fddd-", Namespace:"calico-system", SelfLink:"", UID:"50aad275-e5a5-4169-bc0a-89669a369738", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b8468fddd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8", Pod:"calico-apiserver-6b8468fddd-sh56r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali479b4be1432", MAC:"0a:22:04:5b:91:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.439623 containerd[1696]: 2026-03-07 01:21:19.399 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-sh56r" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--sh56r-eth0" Mar 7 01:21:19.442687 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e-shm.mount: Deactivated successfully. Mar 7 01:21:19.470147 kubelet[3226]: I0307 01:21:19.468168 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-fwssz" podStartSLOduration=1.9084969059999999 podStartE2EDuration="27.467917893s" podCreationTimestamp="2026-03-07 01:20:52 +0000 UTC" firstStartedPulling="2026-03-07 01:20:52.802858815 +0000 UTC m=+15.731111177" lastFinishedPulling="2026-03-07 01:21:18.362279802 +0000 UTC m=+41.290532164" observedRunningTime="2026-03-07 01:21:19.46562186 +0000 UTC m=+42.393874222" watchObservedRunningTime="2026-03-07 01:21:19.467917893 +0000 UTC m=+42.396170255" Mar 7 01:21:19.481565 systemd-networkd[1329]: calia91edd9d389: Link UP Mar 7 01:21:19.481850 systemd-networkd[1329]: calia91edd9d389: Gained carrier Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.052 [ERROR][4234] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.094 [INFO][4234] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0 calico-apiserver-6b8468fddd- calico-system 0e08f188-ee09-49c4-9a65-26ca23d44a3a 857 0 2026-03-07 01:20:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b8468fddd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b calico-apiserver-6b8468fddd-6rqlm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia91edd9d389 [] [] }} 
ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.094 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.192 [INFO][4293] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" HandleID="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.221 [INFO][4293] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" HandleID="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"calico-apiserver-6b8468fddd-6rqlm", "timestamp":"2026-03-07 01:21:19.192320607 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001234a0)} Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.221 
[INFO][4293] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.337 [INFO][4293] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.337 [INFO][4293] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.339 [INFO][4293] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.346 [INFO][4293] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.358 [INFO][4293] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.367 [INFO][4293] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.374 [INFO][4293] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.374 [INFO][4293] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.402 [INFO][4293] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86 Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.425 [INFO][4293] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 
handle="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.457 [INFO][4293] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.45.65/26] block=192.168.45.64/26 handle="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.457 [INFO][4293] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.65/26] handle="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.458 [INFO][4293] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:19.543775 containerd[1696]: 2026-03-07 01:21:19.458 [INFO][4293] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.65/26] IPv6=[] ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" HandleID="k8s-pod-network.774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" Mar 7 01:21:19.551014 containerd[1696]: 2026-03-07 01:21:19.464 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0", GenerateName:"calico-apiserver-6b8468fddd-", Namespace:"calico-system", SelfLink:"", UID:"0e08f188-ee09-49c4-9a65-26ca23d44a3a", ResourceVersion:"857", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b8468fddd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"calico-apiserver-6b8468fddd-6rqlm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia91edd9d389", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.551014 containerd[1696]: 2026-03-07 01:21:19.465 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.65/32] ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" Mar 7 01:21:19.551014 containerd[1696]: 2026-03-07 01:21:19.465 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia91edd9d389 ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" Mar 7 01:21:19.551014 containerd[1696]: 2026-03-07 01:21:19.481 [INFO][4234] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" Mar 7 01:21:19.551014 containerd[1696]: 2026-03-07 01:21:19.494 [INFO][4234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0", GenerateName:"calico-apiserver-6b8468fddd-", Namespace:"calico-system", SelfLink:"", UID:"0e08f188-ee09-49c4-9a65-26ca23d44a3a", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b8468fddd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86", Pod:"calico-apiserver-6b8468fddd-6rqlm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.65/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia91edd9d389", MAC:"36:60:fe:ef:31:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.551014 containerd[1696]: 2026-03-07 01:21:19.536 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86" Namespace="calico-system" Pod="calico-apiserver-6b8468fddd-6rqlm" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--apiserver--6b8468fddd--6rqlm-eth0" Mar 7 01:21:19.614487 containerd[1696]: time="2026-03-07T01:21:19.612677534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:19.614487 containerd[1696]: time="2026-03-07T01:21:19.612760935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:19.614487 containerd[1696]: time="2026-03-07T01:21:19.612777535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:19.614487 containerd[1696]: time="2026-03-07T01:21:19.612881737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:19.713303 systemd[1]: Started cri-containerd-6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8.scope - libcontainer container 6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8. 
Mar 7 01:21:19.728837 systemd-networkd[1329]: calibffaed1c943: Link UP Mar 7 01:21:19.731348 systemd-networkd[1329]: calibffaed1c943: Gained carrier Mar 7 01:21:19.735401 containerd[1696]: time="2026-03-07T01:21:19.731678912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:19.735401 containerd[1696]: time="2026-03-07T01:21:19.731749713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:19.735401 containerd[1696]: time="2026-03-07T01:21:19.731786314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:19.735401 containerd[1696]: time="2026-03-07T01:21:19.731924416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:18.845 [ERROR][4175] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:18.932 [INFO][4175] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0 coredns-7d764666f9- kube-system 93fc7b2a-d643-4053-92d9-634e00359c70 855 0 2026-03-07 01:20:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b coredns-7d764666f9-6wzp5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibffaed1c943 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } 
{readiness-probe TCP 8181 0 }] [] }} ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:18.932 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.207 [INFO][4263] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" HandleID="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.235 [INFO][4263] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" HandleID="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122950), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"coredns-7d764666f9-6wzp5", "timestamp":"2026-03-07 01:21:19.207241217 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000320580)} Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.236 [INFO][4263] ipam/ipam_plugin.go 438: 
About to acquire host-wide IPAM lock. Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.459 [INFO][4263] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.459 [INFO][4263] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.474 [INFO][4263] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.490 [INFO][4263] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.587 [INFO][4263] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.594 [INFO][4263] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.605 [INFO][4263] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.605 [INFO][4263] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.610 [INFO][4263] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.648 [INFO][4263] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 
handle="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.696 [INFO][4263] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.45.67/26] block=192.168.45.64/26 handle="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.696 [INFO][4263] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.67/26] handle="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.700 [INFO][4263] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:19.791182 containerd[1696]: 2026-03-07 01:21:19.700 [INFO][4263] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.67/26] IPv6=[] ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" HandleID="k8s-pod-network.685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" Mar 7 01:21:19.792126 containerd[1696]: 2026-03-07 01:21:19.716 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"93fc7b2a-d643-4053-92d9-634e00359c70", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 40, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"coredns-7d764666f9-6wzp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibffaed1c943", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.792126 containerd[1696]: 2026-03-07 01:21:19.722 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.67/32] ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" 
WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" Mar 7 01:21:19.792126 containerd[1696]: 2026-03-07 01:21:19.722 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibffaed1c943 ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" Mar 7 01:21:19.792126 containerd[1696]: 2026-03-07 01:21:19.734 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" Mar 7 01:21:19.792126 containerd[1696]: 2026-03-07 01:21:19.741 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"93fc7b2a-d643-4053-92d9-634e00359c70", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c", Pod:"coredns-7d764666f9-6wzp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibffaed1c943", MAC:"5e:2f:05:40:0d:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.792487 containerd[1696]: 2026-03-07 01:21:19.788 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c" Namespace="kube-system" Pod="coredns-7d764666f9-6wzp5" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--6wzp5-eth0" Mar 7 01:21:19.792307 systemd[1]: Started cri-containerd-774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86.scope - libcontainer container 774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86. 
Mar 7 01:21:19.909454 containerd[1696]: time="2026-03-07T01:21:19.908956212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:19.909454 containerd[1696]: time="2026-03-07T01:21:19.909067213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:19.909454 containerd[1696]: time="2026-03-07T01:21:19.909136614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:19.909454 containerd[1696]: time="2026-03-07T01:21:19.909330417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:19.951189 systemd-networkd[1329]: cali12f819bc079: Link UP Mar 7 01:21:19.952988 systemd-networkd[1329]: cali12f819bc079: Gained carrier Mar 7 01:21:19.984595 systemd[1]: Started cri-containerd-685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c.scope - libcontainer container 685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c. 
Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.140 [ERROR][4250] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.195 [INFO][4250] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0 whisker-65d4c8b8c7- calico-system 9aeeed72-eae0-49fc-86f0-39c207a04179 875 0 2026-03-07 01:20:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65d4c8b8c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b whisker-65d4c8b8c7-bfgrz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali12f819bc079 [] [] }} ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.197 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.269 [INFO][4312] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.287 
[INFO][4312] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"whisker-65d4c8b8c7-bfgrz", "timestamp":"2026-03-07 01:21:19.269761599 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000204f20)} Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.287 [INFO][4312] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.701 [INFO][4312] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.701 [INFO][4312] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.718 [INFO][4312] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.778 [INFO][4312] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.814 [INFO][4312] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.823 [INFO][4312] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.836 [INFO][4312] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.836 [INFO][4312] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.843 [INFO][4312] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.862 [INFO][4312] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.916 [INFO][4312] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.45.68/26] block=192.168.45.64/26 handle="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.921 [INFO][4312] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.68/26] handle="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.921 [INFO][4312] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:19.994967 containerd[1696]: 2026-03-07 01:21:19.923 [INFO][4312] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.68/26] IPv6=[] ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" Mar 7 01:21:19.995931 containerd[1696]: 2026-03-07 01:21:19.937 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0", GenerateName:"whisker-65d4c8b8c7-", Namespace:"calico-system", SelfLink:"", UID:"9aeeed72-eae0-49fc-86f0-39c207a04179", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65d4c8b8c7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"whisker-65d4c8b8c7-bfgrz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12f819bc079", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.995931 containerd[1696]: 2026-03-07 01:21:19.937 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.68/32] ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" Mar 7 01:21:19.995931 containerd[1696]: 2026-03-07 01:21:19.937 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12f819bc079 ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" Mar 7 01:21:19.995931 containerd[1696]: 2026-03-07 01:21:19.959 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" Mar 7 01:21:19.995931 containerd[1696]: 2026-03-07 01:21:19.961 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0", GenerateName:"whisker-65d4c8b8c7-", Namespace:"calico-system", SelfLink:"", UID:"9aeeed72-eae0-49fc-86f0-39c207a04179", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65d4c8b8c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e", Pod:"whisker-65d4c8b8c7-bfgrz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12f819bc079", MAC:"6e:29:d4:9d:4f:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:19.995931 containerd[1696]: 2026-03-07 01:21:19.993 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Namespace="calico-system" 
Pod="whisker-65d4c8b8c7-bfgrz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0" Mar 7 01:21:20.038220 containerd[1696]: time="2026-03-07T01:21:20.036116805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:20.038220 containerd[1696]: time="2026-03-07T01:21:20.036178906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:20.038220 containerd[1696]: time="2026-03-07T01:21:20.036214006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.038220 containerd[1696]: time="2026-03-07T01:21:20.036337908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.089310 systemd[1]: Started cri-containerd-86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e.scope - libcontainer container 86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e. 
Mar 7 01:21:20.095920 systemd-networkd[1329]: cali848eae0ca76: Link UP Mar 7 01:21:20.096215 systemd-networkd[1329]: cali848eae0ca76: Gained carrier Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:18.967 [ERROR][4198] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.003 [INFO][4198] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0 calico-kube-controllers-8685cf67d4- calico-system 91b8f873-2f68-41cb-bf26-c02fe2af4478 859 0 2026-03-07 01:20:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8685cf67d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b calico-kube-controllers-8685cf67d4-nwltl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali848eae0ca76 [] [] }} ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.003 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.277 [INFO][4274] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" HandleID="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.289 [INFO][4274] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" HandleID="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a68f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"calico-kube-controllers-8685cf67d4-nwltl", "timestamp":"2026-03-07 01:21:19.277402706 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188f20)} Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.289 [INFO][4274] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.922 [INFO][4274] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.922 [INFO][4274] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.934 [INFO][4274] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:19.969 [INFO][4274] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.001 [INFO][4274] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.009 [INFO][4274] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.035 [INFO][4274] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.035 [INFO][4274] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.042 [INFO][4274] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1 Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.066 [INFO][4274] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.080 [INFO][4274] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.45.69/26] block=192.168.45.64/26 handle="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.081 [INFO][4274] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.69/26] handle="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.081 [INFO][4274] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:20.136177 containerd[1696]: 2026-03-07 01:21:20.081 [INFO][4274] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.69/26] IPv6=[] ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" HandleID="k8s-pod-network.c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Workload="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" Mar 7 01:21:20.139443 containerd[1696]: 2026-03-07 01:21:20.088 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0", GenerateName:"calico-kube-controllers-8685cf67d4-", Namespace:"calico-system", SelfLink:"", UID:"91b8f873-2f68-41cb-bf26-c02fe2af4478", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"8685cf67d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"calico-kube-controllers-8685cf67d4-nwltl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali848eae0ca76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.139443 containerd[1696]: 2026-03-07 01:21:20.088 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.69/32] ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" Mar 7 01:21:20.139443 containerd[1696]: 2026-03-07 01:21:20.088 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali848eae0ca76 ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" Mar 7 01:21:20.139443 containerd[1696]: 2026-03-07 01:21:20.095 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" 
Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" Mar 7 01:21:20.139443 containerd[1696]: 2026-03-07 01:21:20.098 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0", GenerateName:"calico-kube-controllers-8685cf67d4-", Namespace:"calico-system", SelfLink:"", UID:"91b8f873-2f68-41cb-bf26-c02fe2af4478", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8685cf67d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1", Pod:"calico-kube-controllers-8685cf67d4-nwltl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali848eae0ca76", MAC:"62:77:85:5c:a0:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.139443 containerd[1696]: 2026-03-07 01:21:20.125 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1" Namespace="calico-system" Pod="calico-kube-controllers-8685cf67d4-nwltl" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-calico--kube--controllers--8685cf67d4--nwltl-eth0" Mar 7 01:21:20.184237 containerd[1696]: time="2026-03-07T01:21:20.182342367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-6wzp5,Uid:93fc7b2a-d643-4053-92d9-634e00359c70,Namespace:kube-system,Attempt:0,} returns sandbox id \"685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c\"" Mar 7 01:21:20.220448 containerd[1696]: time="2026-03-07T01:21:20.220318102Z" level=info msg="CreateContainer within sandbox \"685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:21:20.244433 systemd-networkd[1329]: calid4d0dc2a4cb: Link UP Mar 7 01:21:20.253317 systemd-networkd[1329]: calid4d0dc2a4cb: Gained carrier Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.768 [INFO][4356] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.769 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" iface="eth0" netns="/var/run/netns/cni-41157b5e-325e-a03d-6fa0-90691b9d1dc6" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.769 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" iface="eth0" netns="/var/run/netns/cni-41157b5e-325e-a03d-6fa0-90691b9d1dc6" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.769 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" iface="eth0" netns="/var/run/netns/cni-41157b5e-325e-a03d-6fa0-90691b9d1dc6" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.769 [INFO][4356] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.769 [INFO][4356] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.897 [INFO][4456] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:19.899 [INFO][4456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:20.216 [INFO][4456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:20.228 [WARNING][4456] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:20.228 [INFO][4456] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:20.232 [INFO][4456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:20.266267 containerd[1696]: 2026-03-07 01:21:20.255 [INFO][4356] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:20.266917 containerd[1696]: time="2026-03-07T01:21:20.266404352Z" level=info msg="TearDown network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\" successfully" Mar 7 01:21:20.266917 containerd[1696]: time="2026-03-07T01:21:20.266466953Z" level=info msg="StopPodSandbox for \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\" returns successfully" Mar 7 01:21:20.272831 containerd[1696]: time="2026-03-07T01:21:20.272401737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b8468fddd-6rqlm,Uid:0e08f188-ee09-49c4-9a65-26ca23d44a3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86\"" Mar 7 01:21:20.277685 containerd[1696]: time="2026-03-07T01:21:20.277377307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6gbz,Uid:7ade0512-4109-41d7-94b7-3d8c1b62c155,Namespace:calico-system,Attempt:1,}" Mar 7 01:21:20.281328 containerd[1696]: 
time="2026-03-07T01:21:20.280848556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b8468fddd-sh56r,Uid:50aad275-e5a5-4169-bc0a-89669a369738,Namespace:calico-system,Attempt:0,} returns sandbox id \"6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8\"" Mar 7 01:21:20.290346 containerd[1696]: time="2026-03-07T01:21:20.290312189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:21:20.296080 containerd[1696]: time="2026-03-07T01:21:20.293627536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:20.296080 containerd[1696]: time="2026-03-07T01:21:20.293711337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:20.296080 containerd[1696]: time="2026-03-07T01:21:20.293753838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.296080 containerd[1696]: time="2026-03-07T01:21:20.295804767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:19.030 [ERROR][4209] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:19.080 [INFO][4209] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0 goldmane-9f7667bb8- calico-system c7e196a9-b819-4eb7-9c05-9cc149a1f20e 864 0 2026-03-07 01:20:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b goldmane-9f7667bb8-5bpvw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid4d0dc2a4cb [] [] }} ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:19.080 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:19.293 [INFO][4288] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" HandleID="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Workload="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" 
Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:19.309 [INFO][4288] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" HandleID="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Workload="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005e8350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"goldmane-9f7667bb8-5bpvw", "timestamp":"2026-03-07 01:21:19.293176729 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000652000)} Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:19.309 [INFO][4288] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.081 [INFO][4288] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.081 [INFO][4288] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.087 [INFO][4288] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.110 [INFO][4288] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.131 [INFO][4288] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.135 [INFO][4288] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.146 [INFO][4288] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.146 [INFO][4288] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.154 [INFO][4288] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.192 [INFO][4288] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.215 [INFO][4288] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.45.70/26] block=192.168.45.64/26 handle="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.216 [INFO][4288] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.70/26] handle="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.216 [INFO][4288] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:20.298983 containerd[1696]: 2026-03-07 01:21:20.216 [INFO][4288] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.70/26] IPv6=[] ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" HandleID="k8s-pod-network.29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Workload="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" Mar 7 01:21:20.302933 containerd[1696]: 2026-03-07 01:21:20.224 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c7e196a9-b819-4eb7-9c05-9cc149a1f20e", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"goldmane-9f7667bb8-5bpvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid4d0dc2a4cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.302933 containerd[1696]: 2026-03-07 01:21:20.225 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.70/32] ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" Mar 7 01:21:20.302933 containerd[1696]: 2026-03-07 01:21:20.225 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4d0dc2a4cb ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" Mar 7 01:21:20.302933 containerd[1696]: 2026-03-07 01:21:20.259 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" Mar 7 01:21:20.302933 containerd[1696]: 2026-03-07 01:21:20.261 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c7e196a9-b819-4eb7-9c05-9cc149a1f20e", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a", Pod:"goldmane-9f7667bb8-5bpvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid4d0dc2a4cb", MAC:"b6:03:de:3d:a2:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.302933 containerd[1696]: 2026-03-07 01:21:20.290 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-5bpvw" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-goldmane--9f7667bb8--5bpvw-eth0" Mar 7 01:21:20.339073 containerd[1696]: time="2026-03-07T01:21:20.339016176Z" level=info msg="CreateContainer within sandbox \"685e986cba283cc6397ce14ae1c8118448e53a94504aa2956babf3f3bc61427c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41ca4f3a9fc31ebf280e30cefeed3cadbebd6b55b5037b84602c20e9d369318f\"" Mar 7 01:21:20.341191 containerd[1696]: time="2026-03-07T01:21:20.340302894Z" level=info msg="StartContainer for \"41ca4f3a9fc31ebf280e30cefeed3cadbebd6b55b5037b84602c20e9d369318f\"" Mar 7 01:21:20.365717 systemd-networkd[1329]: caliefaaa934eb2: Link UP Mar 7 01:21:20.366475 systemd-networkd[1329]: caliefaaa934eb2: Gained carrier Mar 7 01:21:20.367354 systemd[1]: Started cri-containerd-c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1.scope - libcontainer container c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1. Mar 7 01:21:20.405775 systemd[1]: run-containerd-runc-k8s.io-6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8-runc.UXikSx.mount: Deactivated successfully. Mar 7 01:21:20.405923 systemd[1]: run-netns-cni\x2d41157b5e\x2d325e\x2da03d\x2d6fa0\x2d90691b9d1dc6.mount: Deactivated successfully. 
Mar 7 01:21:20.414872 containerd[1696]: time="2026-03-07T01:21:20.414791345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d4c8b8c7-bfgrz,Uid:9aeeed72-eae0-49fc-86f0-39c207a04179,Namespace:calico-system,Attempt:0,} returns sandbox id \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\"" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:19.715 [ERROR][4364] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:19.776 [INFO][4364] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0 coredns-7d764666f9- kube-system 20284904-5e50-41d2-bb96-a70478b6f953 879 0 2026-03-07 01:20:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b coredns-7d764666f9-zrdvd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliefaaa934eb2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:19.776 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 
01:21:19.861 [INFO][4470] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" HandleID="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:19.906 [INFO][4470] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" HandleID="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000377210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"coredns-7d764666f9-zrdvd", "timestamp":"2026-03-07 01:21:19.861765646 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000452dc0)} Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:19.906 [INFO][4470] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.233 [INFO][4470] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.233 [INFO][4470] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.241 [INFO][4470] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.264 [INFO][4470] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.292 [INFO][4470] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.299 [INFO][4470] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.306 [INFO][4470] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.307 [INFO][4470] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.310 [INFO][4470] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8 Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.329 [INFO][4470] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.348 [INFO][4470] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.45.71/26] block=192.168.45.64/26 handle="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.348 [INFO][4470] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.71/26] handle="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.348 [INFO][4470] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:20.430220 containerd[1696]: 2026-03-07 01:21:20.348 [INFO][4470] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.71/26] IPv6=[] ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" HandleID="k8s-pod-network.ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Workload="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:20.433320 containerd[1696]: 2026-03-07 01:21:20.357 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"20284904-5e50-41d2-bb96-a70478b6f953", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"coredns-7d764666f9-zrdvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefaaa934eb2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.433320 containerd[1696]: 2026-03-07 01:21:20.360 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.71/32] ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:20.433320 containerd[1696]: 2026-03-07 01:21:20.360 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefaaa934eb2 
ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:20.433320 containerd[1696]: 2026-03-07 01:21:20.366 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:20.433320 containerd[1696]: 2026-03-07 01:21:20.377 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"20284904-5e50-41d2-bb96-a70478b6f953", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8", 
Pod:"coredns-7d764666f9-zrdvd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefaaa934eb2", MAC:"1e:f8:6e:8c:01:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.434032 containerd[1696]: 2026-03-07 01:21:20.425 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8" Namespace="kube-system" Pod="coredns-7d764666f9-zrdvd" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-coredns--7d764666f9--zrdvd-eth0" Mar 7 01:21:20.464666 containerd[1696]: time="2026-03-07T01:21:20.463137926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:20.464666 containerd[1696]: time="2026-03-07T01:21:20.463206127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:20.464666 containerd[1696]: time="2026-03-07T01:21:20.463241628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.464666 containerd[1696]: time="2026-03-07T01:21:20.463332929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.499298 systemd[1]: Started cri-containerd-41ca4f3a9fc31ebf280e30cefeed3cadbebd6b55b5037b84602c20e9d369318f.scope - libcontainer container 41ca4f3a9fc31ebf280e30cefeed3cadbebd6b55b5037b84602c20e9d369318f. Mar 7 01:21:20.561344 systemd[1]: Started cri-containerd-29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a.scope - libcontainer container 29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a. Mar 7 01:21:20.592592 containerd[1696]: time="2026-03-07T01:21:20.592073745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8685cf67d4-nwltl,Uid:91b8f873-2f68-41cb-bf26-c02fe2af4478,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1\"" Mar 7 01:21:20.601327 containerd[1696]: time="2026-03-07T01:21:20.601209173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:20.601883 containerd[1696]: time="2026-03-07T01:21:20.601810482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:20.603022 containerd[1696]: time="2026-03-07T01:21:20.602955198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.603443 containerd[1696]: time="2026-03-07T01:21:20.603324403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.634557 containerd[1696]: time="2026-03-07T01:21:20.634264640Z" level=info msg="StartContainer for \"41ca4f3a9fc31ebf280e30cefeed3cadbebd6b55b5037b84602c20e9d369318f\" returns successfully" Mar 7 01:21:20.672318 systemd[1]: Started cri-containerd-ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8.scope - libcontainer container ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8. Mar 7 01:21:20.776306 systemd-networkd[1329]: calibffaed1c943: Gained IPv6LL Mar 7 01:21:20.786792 containerd[1696]: time="2026-03-07T01:21:20.784928350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zrdvd,Uid:20284904-5e50-41d2-bb96-a70478b6f953,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8\"" Mar 7 01:21:20.794700 containerd[1696]: time="2026-03-07T01:21:20.794549184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5bpvw,Uid:c7e196a9-b819-4eb7-9c05-9cc149a1f20e,Namespace:calico-system,Attempt:0,} returns sandbox id \"29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a\"" Mar 7 01:21:20.798733 containerd[1696]: time="2026-03-07T01:21:20.798686142Z" level=info msg="CreateContainer within sandbox \"ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:21:20.821209 systemd-networkd[1329]: caliddbb862c03f: Link UP Mar 7 01:21:20.821510 systemd-networkd[1329]: caliddbb862c03f: Gained carrier Mar 7 01:21:20.839696 containerd[1696]: time="2026-03-07T01:21:20.839597711Z" level=info msg="CreateContainer within sandbox 
\"ba002ea2e935c9d6f5063900e5a1409434dc2a3b95d7e58a1fe61e771a6258e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df0553cdbf96bd2e601cb31af27b44b63bd315729e73609aa19f0729c5b6b63f\"" Mar 7 01:21:20.841895 containerd[1696]: time="2026-03-07T01:21:20.841858243Z" level=info msg="StartContainer for \"df0553cdbf96bd2e601cb31af27b44b63bd315729e73609aa19f0729c5b6b63f\"" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.559 [ERROR][4642] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.591 [INFO][4642] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0 csi-node-driver- calico-system 7ade0512-4109-41d7-94b7-3d8c1b62c155 900 0 2026-03-07 01:20:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b csi-node-driver-l6gbz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliddbb862c03f [] [] }} ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.592 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" 
WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.716 [INFO][4768] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" HandleID="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.744 [INFO][4768] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" HandleID="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039f870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"csi-node-driver-l6gbz", "timestamp":"2026-03-07 01:21:20.716473498 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d4580)} Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.744 [INFO][4768] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.744 [INFO][4768] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.745 [INFO][4768] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b' Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.752 [INFO][4768] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.760 [INFO][4768] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.779 [INFO][4768] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.782 [INFO][4768] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.790 [INFO][4768] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.790 [INFO][4768] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.794 [INFO][4768] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133 Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.802 [INFO][4768] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.813 [INFO][4768] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.45.72/26] block=192.168.45.64/26 handle="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.814 [INFO][4768] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.72/26] handle="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" host="ci-4081.3.6-n-8271a56a8b" Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.815 [INFO][4768] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:20.845119 containerd[1696]: 2026-03-07 01:21:20.815 [INFO][4768] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.72/26] IPv6=[] ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" HandleID="k8s-pod-network.46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.846128 containerd[1696]: 2026-03-07 01:21:20.816 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ade0512-4109-41d7-94b7-3d8c1b62c155", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"csi-node-driver-l6gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddbb862c03f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.846128 containerd[1696]: 2026-03-07 01:21:20.816 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.72/32] ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.846128 containerd[1696]: 2026-03-07 01:21:20.816 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddbb862c03f ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.846128 containerd[1696]: 2026-03-07 01:21:20.821 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.846128 containerd[1696]: 2026-03-07 01:21:20.822 
[INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ade0512-4109-41d7-94b7-3d8c1b62c155", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133", Pod:"csi-node-driver-l6gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddbb862c03f", MAC:"1e:c8:cc:ae:1a:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:20.846128 containerd[1696]: 2026-03-07 01:21:20.839 [INFO][4642] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133" Namespace="calico-system" Pod="csi-node-driver-l6gbz" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:20.878856 containerd[1696]: time="2026-03-07T01:21:20.877432738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:21:20.878856 containerd[1696]: time="2026-03-07T01:21:20.877575940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:21:20.878856 containerd[1696]: time="2026-03-07T01:21:20.877639941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.878856 containerd[1696]: time="2026-03-07T01:21:20.877823243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:21:20.882809 systemd[1]: Started cri-containerd-df0553cdbf96bd2e601cb31af27b44b63bd315729e73609aa19f0729c5b6b63f.scope - libcontainer container df0553cdbf96bd2e601cb31af27b44b63bd315729e73609aa19f0729c5b6b63f. Mar 7 01:21:20.911294 systemd[1]: Started cri-containerd-46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133.scope - libcontainer container 46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133. 
Mar 7 01:21:20.935247 containerd[1696]: time="2026-03-07T01:21:20.934956338Z" level=info msg="StartContainer for \"df0553cdbf96bd2e601cb31af27b44b63bd315729e73609aa19f0729c5b6b63f\" returns successfully" Mar 7 01:21:20.960533 containerd[1696]: time="2026-03-07T01:21:20.960140689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6gbz,Uid:7ade0512-4109-41d7-94b7-3d8c1b62c155,Namespace:calico-system,Attempt:1,} returns sandbox id \"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133\"" Mar 7 01:21:21.032284 systemd-networkd[1329]: cali479b4be1432: Gained IPv6LL Mar 7 01:21:21.399722 systemd[1]: run-containerd-runc-k8s.io-dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb-runc.Trqpt8.mount: Deactivated successfully. Mar 7 01:21:21.417280 systemd-networkd[1329]: calia91edd9d389: Gained IPv6LL Mar 7 01:21:21.462920 kubelet[3226]: I0307 01:21:21.462449 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-6wzp5" podStartSLOduration=41.46243178 podStartE2EDuration="41.46243178s" podCreationTimestamp="2026-03-07 01:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:21:21.461932673 +0000 UTC m=+44.390185135" watchObservedRunningTime="2026-03-07 01:21:21.46243178 +0000 UTC m=+44.390684242" Mar 7 01:21:21.672845 systemd-networkd[1329]: cali12f819bc079: Gained IPv6LL Mar 7 01:21:21.673573 systemd-networkd[1329]: calid4d0dc2a4cb: Gained IPv6LL Mar 7 01:21:21.800363 systemd-networkd[1329]: cali848eae0ca76: Gained IPv6LL Mar 7 01:21:21.920185 kernel: calico-node[5001]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:21:21.992429 systemd-networkd[1329]: caliefaaa934eb2: Gained IPv6LL Mar 7 01:21:22.249261 systemd-networkd[1329]: caliddbb862c03f: Gained IPv6LL Mar 7 01:21:22.499989 systemd-networkd[1329]: vxlan.calico: Link UP Mar 7 01:21:22.500000 
systemd-networkd[1329]: vxlan.calico: Gained carrier Mar 7 01:21:23.784586 systemd-networkd[1329]: vxlan.calico: Gained IPv6LL Mar 7 01:21:23.905752 containerd[1696]: time="2026-03-07T01:21:23.905693486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:23.908172 containerd[1696]: time="2026-03-07T01:21:23.908090319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:21:23.911282 containerd[1696]: time="2026-03-07T01:21:23.911216963Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:23.915403 containerd[1696]: time="2026-03-07T01:21:23.915345620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:23.916270 containerd[1696]: time="2026-03-07T01:21:23.916119731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.625517138s" Mar 7 01:21:23.916270 containerd[1696]: time="2026-03-07T01:21:23.916161332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:21:23.917902 containerd[1696]: time="2026-03-07T01:21:23.917772954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:21:23.923945 containerd[1696]: 
time="2026-03-07T01:21:23.923874239Z" level=info msg="CreateContainer within sandbox \"774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:21:23.962626 containerd[1696]: time="2026-03-07T01:21:23.962574778Z" level=info msg="CreateContainer within sandbox \"774af927cf2029f8511fd675d0b71663490a72fe752f9194eb64e00e6bfbee86\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"201e389aa88d71737346bcb4b0251b2dc66559936cbe68f3852813d7faaa8398\"" Mar 7 01:21:23.964604 containerd[1696]: time="2026-03-07T01:21:23.963322888Z" level=info msg="StartContainer for \"201e389aa88d71737346bcb4b0251b2dc66559936cbe68f3852813d7faaa8398\"" Mar 7 01:21:24.000326 systemd[1]: run-containerd-runc-k8s.io-201e389aa88d71737346bcb4b0251b2dc66559936cbe68f3852813d7faaa8398-runc.e7e0Tu.mount: Deactivated successfully. Mar 7 01:21:24.006276 systemd[1]: Started cri-containerd-201e389aa88d71737346bcb4b0251b2dc66559936cbe68f3852813d7faaa8398.scope - libcontainer container 201e389aa88d71737346bcb4b0251b2dc66559936cbe68f3852813d7faaa8398. 
Mar 7 01:21:24.057279 containerd[1696]: time="2026-03-07T01:21:24.056032979Z" level=info msg="StartContainer for \"201e389aa88d71737346bcb4b0251b2dc66559936cbe68f3852813d7faaa8398\" returns successfully" Mar 7 01:21:24.231997 containerd[1696]: time="2026-03-07T01:21:24.231933927Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:24.236119 containerd[1696]: time="2026-03-07T01:21:24.234912168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:21:24.237416 containerd[1696]: time="2026-03-07T01:21:24.237373403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 319.551247ms" Mar 7 01:21:24.237532 containerd[1696]: time="2026-03-07T01:21:24.237427603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:21:24.240315 containerd[1696]: time="2026-03-07T01:21:24.240284543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:21:24.249605 containerd[1696]: time="2026-03-07T01:21:24.249556772Z" level=info msg="CreateContainer within sandbox \"6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:21:24.291070 containerd[1696]: time="2026-03-07T01:21:24.290975749Z" level=info msg="CreateContainer within sandbox \"6853aeca27dc874d77fca6bfe46eb51796cbe4d9ab9adec93e72b6070d44d0c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container 
id \"5f5a0894ebf657ca728444fdfbddd0a4adc0940501493f87d0b1d92fe849d6b2\"" Mar 7 01:21:24.295877 containerd[1696]: time="2026-03-07T01:21:24.293459183Z" level=info msg="StartContainer for \"5f5a0894ebf657ca728444fdfbddd0a4adc0940501493f87d0b1d92fe849d6b2\"" Mar 7 01:21:24.330304 systemd[1]: Started cri-containerd-5f5a0894ebf657ca728444fdfbddd0a4adc0940501493f87d0b1d92fe849d6b2.scope - libcontainer container 5f5a0894ebf657ca728444fdfbddd0a4adc0940501493f87d0b1d92fe849d6b2. Mar 7 01:21:24.392664 containerd[1696]: time="2026-03-07T01:21:24.392614363Z" level=info msg="StartContainer for \"5f5a0894ebf657ca728444fdfbddd0a4adc0940501493f87d0b1d92fe849d6b2\" returns successfully" Mar 7 01:21:24.491725 kubelet[3226]: I0307 01:21:24.491161 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-zrdvd" podStartSLOduration=44.491139135 podStartE2EDuration="44.491139135s" podCreationTimestamp="2026-03-07 01:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:21:21.518575961 +0000 UTC m=+44.446828423" watchObservedRunningTime="2026-03-07 01:21:24.491139135 +0000 UTC m=+47.419391597" Mar 7 01:21:24.509908 kubelet[3226]: I0307 01:21:24.509830 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b8468fddd-6rqlm" podStartSLOduration=29.872370888 podStartE2EDuration="33.509811294s" podCreationTimestamp="2026-03-07 01:20:51 +0000 UTC" firstStartedPulling="2026-03-07 01:21:20.27973374 +0000 UTC m=+43.207986102" lastFinishedPulling="2026-03-07 01:21:23.917174046 +0000 UTC m=+46.845426508" observedRunningTime="2026-03-07 01:21:24.492319551 +0000 UTC m=+47.420572013" watchObservedRunningTime="2026-03-07 01:21:24.509811294 +0000 UTC m=+47.438063756" Mar 7 01:21:24.952141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773883493.mount: Deactivated successfully. 
Mar 7 01:21:25.474084 kubelet[3226]: I0307 01:21:25.474048 3226 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:21:25.474499 kubelet[3226]: I0307 01:21:25.474048 3226 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:21:25.655142 containerd[1696]: time="2026-03-07T01:21:25.655063934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:25.658398 containerd[1696]: time="2026-03-07T01:21:25.658315880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:21:25.660829 containerd[1696]: time="2026-03-07T01:21:25.660761814Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:25.666465 containerd[1696]: time="2026-03-07T01:21:25.666021187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:25.668648 containerd[1696]: time="2026-03-07T01:21:25.667659810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.427335867s" Mar 7 01:21:25.668648 containerd[1696]: time="2026-03-07T01:21:25.667710410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:21:25.670549 containerd[1696]: time="2026-03-07T01:21:25.670506149Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:21:25.677182 containerd[1696]: time="2026-03-07T01:21:25.677144542Z" level=info msg="CreateContainer within sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:21:25.717066 containerd[1696]: time="2026-03-07T01:21:25.717012097Z" level=info msg="CreateContainer within sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\"" Mar 7 01:21:25.719373 containerd[1696]: time="2026-03-07T01:21:25.717953110Z" level=info msg="StartContainer for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\"" Mar 7 01:21:25.757280 systemd[1]: Started cri-containerd-2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e.scope - libcontainer container 2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e. 
Mar 7 01:21:25.835978 containerd[1696]: time="2026-03-07T01:21:25.835897151Z" level=info msg="StartContainer for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" returns successfully" Mar 7 01:21:28.980314 containerd[1696]: time="2026-03-07T01:21:28.980259971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:28.983524 containerd[1696]: time="2026-03-07T01:21:28.983439714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:21:28.986443 containerd[1696]: time="2026-03-07T01:21:28.986375353Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:28.991872 containerd[1696]: time="2026-03-07T01:21:28.991810026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:28.993133 containerd[1696]: time="2026-03-07T01:21:28.992611937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.322053986s" Mar 7 01:21:28.993133 containerd[1696]: time="2026-03-07T01:21:28.992655937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:21:28.993935 containerd[1696]: time="2026-03-07T01:21:28.993908754Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:21:29.017295 containerd[1696]: time="2026-03-07T01:21:29.017252766Z" level=info msg="CreateContainer within sandbox \"c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:21:29.049919 containerd[1696]: time="2026-03-07T01:21:29.049867702Z" level=info msg="CreateContainer within sandbox \"c0dfe006060fd3916dd5a4904928b7c43a2799607dd9a0858bdec60d737595c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dad8b507375119cfe4147ac92d1df2205bc8316a5784ec246059e9ea9a0d99bf\"" Mar 7 01:21:29.050958 containerd[1696]: time="2026-03-07T01:21:29.050631912Z" level=info msg="StartContainer for \"dad8b507375119cfe4147ac92d1df2205bc8316a5784ec246059e9ea9a0d99bf\"" Mar 7 01:21:29.091296 systemd[1]: Started cri-containerd-dad8b507375119cfe4147ac92d1df2205bc8316a5784ec246059e9ea9a0d99bf.scope - libcontainer container dad8b507375119cfe4147ac92d1df2205bc8316a5784ec246059e9ea9a0d99bf. 
Mar 7 01:21:29.137199 containerd[1696]: time="2026-03-07T01:21:29.136838764Z" level=info msg="StartContainer for \"dad8b507375119cfe4147ac92d1df2205bc8316a5784ec246059e9ea9a0d99bf\" returns successfully" Mar 7 01:21:29.507965 kubelet[3226]: I0307 01:21:29.507882 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b8468fddd-sh56r" podStartSLOduration=34.566746694 podStartE2EDuration="38.507860223s" podCreationTimestamp="2026-03-07 01:20:51 +0000 UTC" firstStartedPulling="2026-03-07 01:21:20.297314688 +0000 UTC m=+43.225567150" lastFinishedPulling="2026-03-07 01:21:24.238428317 +0000 UTC m=+47.166680679" observedRunningTime="2026-03-07 01:21:24.511694521 +0000 UTC m=+47.439946983" watchObservedRunningTime="2026-03-07 01:21:29.507860223 +0000 UTC m=+52.436112685" Mar 7 01:21:29.552991 kubelet[3226]: I0307 01:21:29.552713 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8685cf67d4-nwltl" podStartSLOduration=29.15418686 podStartE2EDuration="37.552462919s" podCreationTimestamp="2026-03-07 01:20:52 +0000 UTC" firstStartedPulling="2026-03-07 01:21:20.595443692 +0000 UTC m=+43.523696054" lastFinishedPulling="2026-03-07 01:21:28.993719651 +0000 UTC m=+51.921972113" observedRunningTime="2026-03-07 01:21:29.510396457 +0000 UTC m=+52.438648819" watchObservedRunningTime="2026-03-07 01:21:29.552462919 +0000 UTC m=+52.480715381" Mar 7 01:21:31.404844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2689550453.mount: Deactivated successfully. 
Mar 7 01:21:31.907318 containerd[1696]: time="2026-03-07T01:21:31.907264593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:31.912865 containerd[1696]: time="2026-03-07T01:21:31.912786867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:21:31.915753 containerd[1696]: time="2026-03-07T01:21:31.915698506Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:31.920577 containerd[1696]: time="2026-03-07T01:21:31.920501870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:31.922055 containerd[1696]: time="2026-03-07T01:21:31.921247380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.925291499s" Mar 7 01:21:31.922055 containerd[1696]: time="2026-03-07T01:21:31.921287980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:21:31.924022 containerd[1696]: time="2026-03-07T01:21:31.923996716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:21:31.930064 containerd[1696]: time="2026-03-07T01:21:31.930034097Z" level=info msg="CreateContainer within sandbox \"29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:21:31.963699 containerd[1696]: time="2026-03-07T01:21:31.963642546Z" level=info msg="CreateContainer within sandbox \"29dfd50fc1635fc37af441f1cd31851aa8515139491996909b540e70dfbd4b3a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e82b4e7e7b4db366cbe15dac873ace56537e54a83d03b9da9a20b504e7034255\"" Mar 7 01:21:31.964475 containerd[1696]: time="2026-03-07T01:21:31.964420657Z" level=info msg="StartContainer for \"e82b4e7e7b4db366cbe15dac873ace56537e54a83d03b9da9a20b504e7034255\"" Mar 7 01:21:32.000272 systemd[1]: Started cri-containerd-e82b4e7e7b4db366cbe15dac873ace56537e54a83d03b9da9a20b504e7034255.scope - libcontainer container e82b4e7e7b4db366cbe15dac873ace56537e54a83d03b9da9a20b504e7034255. Mar 7 01:21:32.047757 containerd[1696]: time="2026-03-07T01:21:32.047705270Z" level=info msg="StartContainer for \"e82b4e7e7b4db366cbe15dac873ace56537e54a83d03b9da9a20b504e7034255\" returns successfully" Mar 7 01:21:33.422732 containerd[1696]: time="2026-03-07T01:21:33.422678147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:33.425485 containerd[1696]: time="2026-03-07T01:21:33.425358283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:21:33.428543 containerd[1696]: time="2026-03-07T01:21:33.428489625Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:33.433547 containerd[1696]: time="2026-03-07T01:21:33.433481992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:33.434443 containerd[1696]: 
time="2026-03-07T01:21:33.434170501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.510027483s" Mar 7 01:21:33.434443 containerd[1696]: time="2026-03-07T01:21:33.434211801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:21:33.436397 containerd[1696]: time="2026-03-07T01:21:33.436256229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:21:33.442356 containerd[1696]: time="2026-03-07T01:21:33.442220509Z" level=info msg="CreateContainer within sandbox \"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:21:33.477235 containerd[1696]: time="2026-03-07T01:21:33.477191276Z" level=info msg="CreateContainer within sandbox \"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"792156a222bd558f6ced38598fe55538e16fcb0132ecdc4f721c309b064374a7\"" Mar 7 01:21:33.479167 containerd[1696]: time="2026-03-07T01:21:33.477825884Z" level=info msg="StartContainer for \"792156a222bd558f6ced38598fe55538e16fcb0132ecdc4f721c309b064374a7\"" Mar 7 01:21:33.515287 systemd[1]: Started cri-containerd-792156a222bd558f6ced38598fe55538e16fcb0132ecdc4f721c309b064374a7.scope - libcontainer container 792156a222bd558f6ced38598fe55538e16fcb0132ecdc4f721c309b064374a7. 
Mar 7 01:21:33.571549 containerd[1696]: time="2026-03-07T01:21:33.571494736Z" level=info msg="StartContainer for \"792156a222bd558f6ced38598fe55538e16fcb0132ecdc4f721c309b064374a7\" returns successfully" Mar 7 01:21:35.971353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660195767.mount: Deactivated successfully. Mar 7 01:21:36.035039 containerd[1696]: time="2026-03-07T01:21:36.034541357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:36.037716 containerd[1696]: time="2026-03-07T01:21:36.037526397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:21:36.041283 containerd[1696]: time="2026-03-07T01:21:36.040985043Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:36.045655 containerd[1696]: time="2026-03-07T01:21:36.045597004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:21:36.046974 containerd[1696]: time="2026-03-07T01:21:36.046421115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.610126686s" Mar 7 01:21:36.046974 containerd[1696]: time="2026-03-07T01:21:36.046466616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:21:36.048315 containerd[1696]: time="2026-03-07T01:21:36.048260040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:21:36.055775 containerd[1696]: time="2026-03-07T01:21:36.055733340Z" level=info msg="CreateContainer within sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:21:36.097201 containerd[1696]: time="2026-03-07T01:21:36.097031492Z" level=info msg="CreateContainer within sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\"" Mar 7 01:21:36.099303 containerd[1696]: time="2026-03-07T01:21:36.097902303Z" level=info msg="StartContainer for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\"" Mar 7 01:21:36.142284 systemd[1]: Started cri-containerd-9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a.scope - libcontainer container 9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a. 
Mar 7 01:21:36.192525 containerd[1696]: time="2026-03-07T01:21:36.192464267Z" level=info msg="StartContainer for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" returns successfully" Mar 7 01:21:36.528429 containerd[1696]: time="2026-03-07T01:21:36.528172054Z" level=info msg="StopContainer for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" with timeout 30 (s)" Mar 7 01:21:36.529138 containerd[1696]: time="2026-03-07T01:21:36.528362557Z" level=info msg="StopContainer for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" with timeout 30 (s)" Mar 7 01:21:36.530311 containerd[1696]: time="2026-03-07T01:21:36.530183781Z" level=info msg="Stop container \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" with signal terminated" Mar 7 01:21:36.530881 containerd[1696]: time="2026-03-07T01:21:36.530757289Z" level=info msg="Stop container \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" with signal terminated" Mar 7 01:21:36.544221 systemd[1]: cri-containerd-9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a.scope: Deactivated successfully. 
Mar 7 01:21:36.567484 kubelet[3226]: I0307 01:21:36.567400 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-5bpvw" podStartSLOduration=34.441812699 podStartE2EDuration="45.567380178s" podCreationTimestamp="2026-03-07 01:20:51 +0000 UTC" firstStartedPulling="2026-03-07 01:21:20.796710414 +0000 UTC m=+43.724962776" lastFinishedPulling="2026-03-07 01:21:31.922277793 +0000 UTC m=+54.850530255" observedRunningTime="2026-03-07 01:21:32.533340861 +0000 UTC m=+55.461593323" watchObservedRunningTime="2026-03-07 01:21:36.567380178 +0000 UTC m=+59.495632540" Mar 7 01:21:36.568342 kubelet[3226]: I0307 01:21:36.567518 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-65d4c8b8c7-bfgrz" podStartSLOduration=25.939871766 podStartE2EDuration="41.56751168s" podCreationTimestamp="2026-03-07 01:20:55 +0000 UTC" firstStartedPulling="2026-03-07 01:21:20.419938617 +0000 UTC m=+43.348191079" lastFinishedPulling="2026-03-07 01:21:36.047578631 +0000 UTC m=+58.975830993" observedRunningTime="2026-03-07 01:21:36.562865518 +0000 UTC m=+59.491117880" watchObservedRunningTime="2026-03-07 01:21:36.56751168 +0000 UTC m=+59.495764142" Mar 7 01:21:36.575058 systemd[1]: cri-containerd-2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e.scope: Deactivated successfully. 
Mar 7 01:21:36.609461 containerd[1696]: time="2026-03-07T01:21:36.609377040Z" level=info msg="shim disconnected" id=2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e namespace=k8s.io Mar 7 01:21:36.609461 containerd[1696]: time="2026-03-07T01:21:36.609447141Z" level=warning msg="cleaning up after shim disconnected" id=2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e namespace=k8s.io Mar 7 01:21:36.609461 containerd[1696]: time="2026-03-07T01:21:36.609461941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:21:36.656965 kubelet[3226]: I0307 01:21:36.656475 3226 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:21:36.677929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a-rootfs.mount: Deactivated successfully. Mar 7 01:21:36.678057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e-rootfs.mount: Deactivated successfully. Mar 7 01:21:37.447883 containerd[1696]: time="2026-03-07T01:21:37.203913204Z" level=info msg="StopPodSandbox for \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\"" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.242 [WARNING][5599] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ade0512-4109-41d7-94b7-3d8c1b62c155", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133", Pod:"csi-node-driver-l6gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddbb862c03f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.443 [INFO][5599] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.443 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" iface="eth0" netns="" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.443 [INFO][5599] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.443 [INFO][5599] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.471 [INFO][5607] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.471 [INFO][5607] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.471 [INFO][5607] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.478 [WARNING][5607] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.478 [INFO][5607] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.479 [INFO][5607] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:37.482009 containerd[1696]: 2026-03-07 01:21:37.480 [INFO][5599] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.482009 containerd[1696]: time="2026-03-07T01:21:37.482058133Z" level=info msg="TearDown network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\" successfully" Mar 7 01:21:37.482741 containerd[1696]: time="2026-03-07T01:21:37.482103833Z" level=info msg="StopPodSandbox for \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\" returns successfully" Mar 7 01:21:37.482841 containerd[1696]: time="2026-03-07T01:21:37.482811343Z" level=info msg="RemovePodSandbox for \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\"" Mar 7 01:21:37.482926 containerd[1696]: time="2026-03-07T01:21:37.482851843Z" level=info msg="Forcibly stopping sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\"" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.515 [WARNING][5622] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7ade0512-4109-41d7-94b7-3d8c1b62c155", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 20, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133", Pod:"csi-node-driver-l6gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddbb862c03f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.802 [INFO][5622] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.802 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" iface="eth0" netns="" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.802 [INFO][5622] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.802 [INFO][5622] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.831 [INFO][5631] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.831 [INFO][5631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.831 [INFO][5631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.839 [WARNING][5631] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.839 [INFO][5631] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" HandleID="k8s-pod-network.faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-csi--node--driver--l6gbz-eth0" Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.840 [INFO][5631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:21:37.844318 containerd[1696]: 2026-03-07 01:21:37.842 [INFO][5622] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e" Mar 7 01:21:37.845050 containerd[1696]: time="2026-03-07T01:21:37.844356189Z" level=info msg="TearDown network for sandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\" successfully" Mar 7 01:21:40.044401 containerd[1696]: time="2026-03-07T01:21:40.044182475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:21:40.045654 containerd[1696]: time="2026-03-07T01:21:40.044856085Z" level=info msg="RemovePodSandbox \"faa1c28d90b1ce9a80dcff7e9e120bfb195fbe6e560c055a567a3b16ae37af9e\" returns successfully"
Mar 7 01:21:40.075223 containerd[1696]: time="2026-03-07T01:21:40.075147591Z" level=info msg="shim disconnected" id=9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a namespace=k8s.io
Mar 7 01:21:40.075223 containerd[1696]: time="2026-03-07T01:21:40.075210891Z" level=warning msg="cleaning up after shim disconnected" id=9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a namespace=k8s.io
Mar 7 01:21:40.075223 containerd[1696]: time="2026-03-07T01:21:40.075224692Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:21:40.078883 containerd[1696]: time="2026-03-07T01:21:40.078725539Z" level=info msg="StopContainer for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" returns successfully"
Mar 7 01:21:40.098132 containerd[1696]: time="2026-03-07T01:21:40.098054198Z" level=info msg="StopContainer for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" returns successfully"
Mar 7 01:21:40.098773 containerd[1696]: time="2026-03-07T01:21:40.098741307Z" level=info msg="StopPodSandbox for \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\""
Mar 7 01:21:40.098882 containerd[1696]: time="2026-03-07T01:21:40.098788907Z" level=info msg="Container to stop \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:21:40.098882 containerd[1696]: time="2026-03-07T01:21:40.098806108Z" level=info msg="Container to stop \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:21:40.104130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e-shm.mount: Deactivated successfully.
Mar 7 01:21:40.111310 systemd[1]: cri-containerd-86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e.scope: Deactivated successfully.
Mar 7 01:21:40.134029 containerd[1696]: time="2026-03-07T01:21:40.133799177Z" level=info msg="shim disconnected" id=86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e namespace=k8s.io
Mar 7 01:21:40.134029 containerd[1696]: time="2026-03-07T01:21:40.133892978Z" level=warning msg="cleaning up after shim disconnected" id=86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e namespace=k8s.io
Mar 7 01:21:40.134029 containerd[1696]: time="2026-03-07T01:21:40.133904378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:21:40.135753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e-rootfs.mount: Deactivated successfully.
Mar 7 01:21:40.221618 systemd-networkd[1329]: cali12f819bc079: Link DOWN
Mar 7 01:21:40.221627 systemd-networkd[1329]: cali12f819bc079: Lost carrier
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.217 [INFO][5691] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.217 [INFO][5691] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" iface="eth0" netns="/var/run/netns/cni-ea3feff5-cf9d-e55f-73e5-23a479e4ea73"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.219 [INFO][5691] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" iface="eth0" netns="/var/run/netns/cni-ea3feff5-cf9d-e55f-73e5-23a479e4ea73"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.230 [INFO][5691] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" after=12.303965ms iface="eth0" netns="/var/run/netns/cni-ea3feff5-cf9d-e55f-73e5-23a479e4ea73"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.230 [INFO][5691] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.230 [INFO][5691] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.283 [INFO][5702] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.284 [INFO][5702] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.284 [INFO][5702] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.328 [INFO][5702] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.328 [INFO][5702] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.331 [INFO][5702] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:21:40.338548 containerd[1696]: 2026-03-07 01:21:40.335 [INFO][5691] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:21:40.340140 containerd[1696]: time="2026-03-07T01:21:40.339468234Z" level=info msg="TearDown network for sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" successfully"
Mar 7 01:21:40.340140 containerd[1696]: time="2026-03-07T01:21:40.339508934Z" level=info msg="StopPodSandbox for \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" returns successfully"
Mar 7 01:21:40.346055 systemd[1]: run-netns-cni\x2dea3feff5\x2dcf9d\x2de55f\x2d73e5\x2d23a479e4ea73.mount: Deactivated successfully.
Mar 7 01:21:40.471846 kubelet[3226]: I0307 01:21:40.471228 3226 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-nginx-config\" (UniqueName: \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-nginx-config\") pod \"9aeeed72-eae0-49fc-86f0-39c207a04179\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") "
Mar 7 01:21:40.471846 kubelet[3226]: I0307 01:21:40.471298 3226 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9aeeed72-eae0-49fc-86f0-39c207a04179-kube-api-access-jjwtp\" (UniqueName: \"kubernetes.io/projected/9aeeed72-eae0-49fc-86f0-39c207a04179-kube-api-access-jjwtp\") pod \"9aeeed72-eae0-49fc-86f0-39c207a04179\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") "
Mar 7 01:21:40.471846 kubelet[3226]: I0307 01:21:40.471326 3226 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-ca-bundle\") pod \"9aeeed72-eae0-49fc-86f0-39c207a04179\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") "
Mar 7 01:21:40.471846 kubelet[3226]: I0307 01:21:40.471355 3226 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-backend-key-pair\") pod \"9aeeed72-eae0-49fc-86f0-39c207a04179\" (UID: \"9aeeed72-eae0-49fc-86f0-39c207a04179\") "
Mar 7 01:21:40.474117 kubelet[3226]: I0307 01:21:40.473535 3226 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-ca-bundle" pod "9aeeed72-eae0-49fc-86f0-39c207a04179" (UID: "9aeeed72-eae0-49fc-86f0-39c207a04179"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:21:40.474117 kubelet[3226]: I0307 01:21:40.473875 3226 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-nginx-config" pod "9aeeed72-eae0-49fc-86f0-39c207a04179" (UID: "9aeeed72-eae0-49fc-86f0-39c207a04179"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:21:40.483299 kubelet[3226]: I0307 01:21:40.483256 3226 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aeeed72-eae0-49fc-86f0-39c207a04179-kube-api-access-jjwtp" pod "9aeeed72-eae0-49fc-86f0-39c207a04179" (UID: "9aeeed72-eae0-49fc-86f0-39c207a04179"). InnerVolumeSpecName "kube-api-access-jjwtp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:21:40.483597 kubelet[3226]: I0307 01:21:40.483522 3226 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-backend-key-pair" pod "9aeeed72-eae0-49fc-86f0-39c207a04179" (UID: "9aeeed72-eae0-49fc-86f0-39c207a04179"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 01:21:40.485951 systemd[1]: var-lib-kubelet-pods-9aeeed72\x2deae0\x2d49fc\x2d86f0\x2d39c207a04179-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djjwtp.mount: Deactivated successfully.
Mar 7 01:21:40.486407 systemd[1]: var-lib-kubelet-pods-9aeeed72\x2deae0\x2d49fc\x2d86f0\x2d39c207a04179-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 7 01:21:40.518783 systemd[1]: Created slice kubepods-besteffort-pod7e1a3e97_9a69_4283_90ea_210216e07cc0.slice - libcontainer container kubepods-besteffort-pod7e1a3e97_9a69_4283_90ea_210216e07cc0.slice.
Mar 7 01:21:40.545587 kubelet[3226]: I0307 01:21:40.544090 3226 scope.go:122] "RemoveContainer" containerID="9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a"
Mar 7 01:21:40.547964 containerd[1696]: time="2026-03-07T01:21:40.547921028Z" level=info msg="RemoveContainer for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\""
Mar 7 01:21:40.558716 systemd[1]: Removed slice kubepods-besteffort-pod9aeeed72_eae0_49fc_86f0_39c207a04179.slice - libcontainer container kubepods-besteffort-pod9aeeed72_eae0_49fc_86f0_39c207a04179.slice.
Mar 7 01:21:40.562734 containerd[1696]: time="2026-03-07T01:21:40.561440609Z" level=info msg="RemoveContainer for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" returns successfully"
Mar 7 01:21:40.563416 kubelet[3226]: I0307 01:21:40.563343 3226 scope.go:122] "RemoveContainer" containerID="2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e"
Mar 7 01:21:40.565779 containerd[1696]: time="2026-03-07T01:21:40.565427362Z" level=info msg="RemoveContainer for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\""
Mar 7 01:21:40.572673 kubelet[3226]: I0307 01:21:40.572625 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdkx\" (UniqueName: \"kubernetes.io/projected/7e1a3e97-9a69-4283-90ea-210216e07cc0-kube-api-access-xtdkx\") pod \"whisker-6c47b979c4-6bdrb\" (UID: \"7e1a3e97-9a69-4283-90ea-210216e07cc0\") " pod="calico-system/whisker-6c47b979c4-6bdrb"
Mar 7 01:21:40.573029 kubelet[3226]: I0307 01:21:40.573003 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7e1a3e97-9a69-4283-90ea-210216e07cc0-nginx-config\") pod \"whisker-6c47b979c4-6bdrb\" (UID: \"7e1a3e97-9a69-4283-90ea-210216e07cc0\") " pod="calico-system/whisker-6c47b979c4-6bdrb"
Mar 7 01:21:40.573259 kubelet[3226]: I0307 01:21:40.573172 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e1a3e97-9a69-4283-90ea-210216e07cc0-whisker-backend-key-pair\") pod \"whisker-6c47b979c4-6bdrb\" (UID: \"7e1a3e97-9a69-4283-90ea-210216e07cc0\") " pod="calico-system/whisker-6c47b979c4-6bdrb"
Mar 7 01:21:40.573259 kubelet[3226]: I0307 01:21:40.573236 3226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1a3e97-9a69-4283-90ea-210216e07cc0-whisker-ca-bundle\") pod \"whisker-6c47b979c4-6bdrb\" (UID: \"7e1a3e97-9a69-4283-90ea-210216e07cc0\") " pod="calico-system/whisker-6c47b979c4-6bdrb"
Mar 7 01:21:40.573683 kubelet[3226]: I0307 01:21:40.573277 3226 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-nginx-config\") on node \"ci-4081.3.6-n-8271a56a8b\" DevicePath \"\""
Mar 7 01:21:40.573683 kubelet[3226]: I0307 01:21:40.573292 3226 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jjwtp\" (UniqueName: \"kubernetes.io/projected/9aeeed72-eae0-49fc-86f0-39c207a04179-kube-api-access-jjwtp\") on node \"ci-4081.3.6-n-8271a56a8b\" DevicePath \"\""
Mar 7 01:21:40.573683 kubelet[3226]: I0307 01:21:40.573304 3226 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-8271a56a8b\" DevicePath \"\""
Mar 7 01:21:40.573683 kubelet[3226]: I0307 01:21:40.573317 3226 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aeeed72-eae0-49fc-86f0-39c207a04179-whisker-ca-bundle\") on node \"ci-4081.3.6-n-8271a56a8b\" DevicePath \"\""
Mar 7 01:21:40.578852 containerd[1696]: time="2026-03-07T01:21:40.578766841Z" level=info msg="RemoveContainer for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" returns successfully"
Mar 7 01:21:40.579454 kubelet[3226]: I0307 01:21:40.579227 3226 scope.go:122] "RemoveContainer" containerID="9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a"
Mar 7 01:21:40.580161 containerd[1696]: time="2026-03-07T01:21:40.579738254Z" level=error msg="ContainerStatus for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\": not found"
Mar 7 01:21:40.580792 kubelet[3226]: E0307 01:21:40.580544 3226 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\": not found" containerID="9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a"
Mar 7 01:21:40.580792 kubelet[3226]: I0307 01:21:40.580578 3226 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a"} err="failed to get container status \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\": not found"
Mar 7 01:21:40.580792 kubelet[3226]: I0307 01:21:40.580622 3226 scope.go:122] "RemoveContainer" containerID="2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e"
Mar 7 01:21:40.581755 containerd[1696]: time="2026-03-07T01:21:40.581440477Z" level=error msg="ContainerStatus for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\": not found"
Mar 7 01:21:40.582105 kubelet[3226]: E0307 01:21:40.581997 3226 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\": not found" containerID="2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e"
Mar 7 01:21:40.582105 kubelet[3226]: I0307 01:21:40.582068 3226 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e"} err="failed to get container status \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\": not found"
Mar 7 01:21:40.582656 kubelet[3226]: I0307 01:21:40.582405 3226 scope.go:122] "RemoveContainer" containerID="9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a"
Mar 7 01:21:40.583385 containerd[1696]: time="2026-03-07T01:21:40.583054899Z" level=error msg="ContainerStatus for \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\": not found"
Mar 7 01:21:40.583956 kubelet[3226]: I0307 01:21:40.583740 3226 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a"} err="failed to get container status \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9860089e58c3114bf34989ddbc42f23e78c7920f5e080d6cb64bd4eb51de593a\": not found"
Mar 7 01:21:40.583956 kubelet[3226]: I0307 01:21:40.583919 3226 scope.go:122] "RemoveContainer" containerID="2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e"
Mar 7 01:21:40.585047 containerd[1696]: time="2026-03-07T01:21:40.584762521Z" level=error msg="ContainerStatus for \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\": not found"
Mar 7 01:21:40.585333 kubelet[3226]: I0307 01:21:40.585200 3226 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e"} err="failed to get container status \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c5e49635e6b54a604a00ace5264b4a5bf42dfa5082f3d293feef5b1c97cf35e\": not found"
Mar 7 01:21:40.896287 containerd[1696]: time="2026-03-07T01:21:40.896230596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c47b979c4-6bdrb,Uid:7e1a3e97-9a69-4283-90ea-210216e07cc0,Namespace:calico-system,Attempt:0,}"
Mar 7 01:21:41.128319 systemd-networkd[1329]: cali439f8da71f6: Link UP
Mar 7 01:21:41.135906 systemd-networkd[1329]: cali439f8da71f6: Gained carrier
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:40.998 [INFO][5735] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0 whisker-6c47b979c4- calico-system 7e1a3e97-9a69-4283-90ea-210216e07cc0 1056 0 2026-03-07 01:21:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c47b979c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-8271a56a8b whisker-6c47b979c4-6bdrb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali439f8da71f6 [] [] }} ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:40.998 [INFO][5735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.042 [INFO][5747] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" HandleID="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.057 [INFO][5747] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" HandleID="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8271a56a8b", "pod":"whisker-6c47b979c4-6bdrb", "timestamp":"2026-03-07 01:21:41.042830061 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8271a56a8b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003871e0)}
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.058 [INFO][5747] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.058 [INFO][5747] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.058 [INFO][5747] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8271a56a8b'
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.061 [INFO][5747] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.068 [INFO][5747] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.074 [INFO][5747] ipam/ipam.go 526: Trying affinity for 192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.077 [INFO][5747] ipam/ipam.go 160: Attempting to load block cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.079 [INFO][5747] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.45.64/26 host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.079 [INFO][5747] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.45.64/26 handle="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.082 [INFO][5747] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.088 [INFO][5747] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.45.64/26 handle="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.101 [INFO][5747] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.45.73/26] block=192.168.45.64/26 handle="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.101 [INFO][5747] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.45.73/26] handle="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" host="ci-4081.3.6-n-8271a56a8b"
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.102 [INFO][5747] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:21:41.169711 containerd[1696]: 2026-03-07 01:21:41.102 [INFO][5747] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.45.73/26] IPv6=[] ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" HandleID="k8s-pod-network.22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0"
Mar 7 01:21:41.173885 containerd[1696]: 2026-03-07 01:21:41.114 [INFO][5735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0", GenerateName:"whisker-6c47b979c4-", Namespace:"calico-system", SelfLink:"", UID:"7e1a3e97-9a69-4283-90ea-210216e07cc0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c47b979c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"", Pod:"whisker-6c47b979c4-6bdrb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali439f8da71f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:21:41.173885 containerd[1696]: 2026-03-07 01:21:41.114 [INFO][5735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.73/32] ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0"
Mar 7 01:21:41.173885 containerd[1696]: 2026-03-07 01:21:41.114 [INFO][5735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali439f8da71f6 ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0"
Mar 7 01:21:41.173885 containerd[1696]: 2026-03-07 01:21:41.135 [INFO][5735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0"
Mar 7 01:21:41.173885 containerd[1696]: 2026-03-07 01:21:41.145 [INFO][5735] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0", GenerateName:"whisker-6c47b979c4-", Namespace:"calico-system", SelfLink:"", UID:"7e1a3e97-9a69-4283-90ea-210216e07cc0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c47b979c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8271a56a8b", ContainerID:"22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a", Pod:"whisker-6c47b979c4-6bdrb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali439f8da71f6", MAC:"0a:3a:bf:34:2a:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:21:41.173885 containerd[1696]: 2026-03-07 01:21:41.162 [INFO][5735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a" Namespace="calico-system" Pod="whisker-6c47b979c4-6bdrb" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--6c47b979c4--6bdrb-eth0"
Mar 7 01:21:41.213261 kubelet[3226]: I0307 01:21:41.212632 3226 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9aeeed72-eae0-49fc-86f0-39c207a04179" path="/var/lib/kubelet/pods/9aeeed72-eae0-49fc-86f0-39c207a04179/volumes"
Mar 7 01:21:41.231775 containerd[1696]: time="2026-03-07T01:21:41.230267074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:21:41.235028 containerd[1696]: time="2026-03-07T01:21:41.234341428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:21:41.235028 containerd[1696]: time="2026-03-07T01:21:41.234371829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:21:41.235028 containerd[1696]: time="2026-03-07T01:21:41.234487130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:21:41.277293 systemd[1]: Started cri-containerd-22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a.scope - libcontainer container 22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a.
Mar 7 01:21:41.375916 containerd[1696]: time="2026-03-07T01:21:41.375864025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c47b979c4-6bdrb,Uid:7e1a3e97-9a69-4283-90ea-210216e07cc0,Namespace:calico-system,Attempt:0,} returns sandbox id \"22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a\""
Mar 7 01:21:41.387580 containerd[1696]: time="2026-03-07T01:21:41.387538782Z" level=info msg="CreateContainer within sandbox \"22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 7 01:21:41.430518 containerd[1696]: time="2026-03-07T01:21:41.430366756Z" level=info msg="CreateContainer within sandbox \"22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8011bce90a769c94ce3a55e44673b9e81f8fe814539096d17b40e5a91a917f5b\""
Mar 7 01:21:41.432471 containerd[1696]: time="2026-03-07T01:21:41.432080079Z" level=info msg="StartContainer for \"8011bce90a769c94ce3a55e44673b9e81f8fe814539096d17b40e5a91a917f5b\""
Mar 7 01:21:41.447799 containerd[1696]: time="2026-03-07T01:21:41.447418285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:21:41.460574 containerd[1696]: time="2026-03-07T01:21:41.460508960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 7 01:21:41.463477 containerd[1696]: time="2026-03-07T01:21:41.463436499Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:21:41.464471 systemd[1]: Started cri-containerd-8011bce90a769c94ce3a55e44673b9e81f8fe814539096d17b40e5a91a917f5b.scope - libcontainer container 8011bce90a769c94ce3a55e44673b9e81f8fe814539096d17b40e5a91a917f5b.
Mar 7 01:21:41.469753 containerd[1696]: time="2026-03-07T01:21:41.469711483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:21:41.470941 containerd[1696]: time="2026-03-07T01:21:41.470868699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 5.422406956s"
Mar 7 01:21:41.470941 containerd[1696]: time="2026-03-07T01:21:41.470904799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 7 01:21:41.488252 containerd[1696]: time="2026-03-07T01:21:41.488192731Z" level=info msg="CreateContainer within sandbox \"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 7 01:21:41.528052 containerd[1696]: time="2026-03-07T01:21:41.527666860Z" level=info msg="StartContainer for \"8011bce90a769c94ce3a55e44673b9e81f8fe814539096d17b40e5a91a917f5b\" returns successfully"
Mar 7 01:21:41.536023 containerd[1696]: time="2026-03-07T01:21:41.535424464Z" level=info msg="CreateContainer within sandbox \"46dc43e88af24ffdddefe59ae8f8ee529076997df6bf94a11133492380e66133\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"091a58b3819d855d386371f5711ab778eec2a671fdacf391a59e1edcb432e9b6\""
Mar 7 01:21:41.538766 containerd[1696]: time="2026-03-07T01:21:41.536565180Z" level=info msg="StartContainer for \"091a58b3819d855d386371f5711ab778eec2a671fdacf391a59e1edcb432e9b6\""
Mar 7 01:21:41.543794 containerd[1696]: time="2026-03-07T01:21:41.543742676Z" level=info msg="CreateContainer within sandbox \"22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 7 01:21:41.575399 systemd[1]: Started cri-containerd-091a58b3819d855d386371f5711ab778eec2a671fdacf391a59e1edcb432e9b6.scope - libcontainer container 091a58b3819d855d386371f5711ab778eec2a671fdacf391a59e1edcb432e9b6.
Mar 7 01:21:41.587472 containerd[1696]: time="2026-03-07T01:21:41.585445635Z" level=info msg="CreateContainer within sandbox \"22a178be856ca42b0127705f496d78794c55c89844fcb1433bd173d950d7f55a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b830c7452fc56ff3f725361a44c6ab5cef8a234c2c7845023e627741119fc3ad\""
Mar 7 01:21:41.587472 containerd[1696]: time="2026-03-07T01:21:41.586155744Z" level=info msg="StartContainer for \"b830c7452fc56ff3f725361a44c6ab5cef8a234c2c7845023e627741119fc3ad\""
Mar 7 01:21:41.623911 containerd[1696]: time="2026-03-07T01:21:41.623858650Z" level=info msg="StartContainer for \"091a58b3819d855d386371f5711ab778eec2a671fdacf391a59e1edcb432e9b6\" returns successfully"
Mar 7 01:21:41.626474 systemd[1]: Started cri-containerd-b830c7452fc56ff3f725361a44c6ab5cef8a234c2c7845023e627741119fc3ad.scope - libcontainer container b830c7452fc56ff3f725361a44c6ab5cef8a234c2c7845023e627741119fc3ad.
Mar 7 01:21:41.680733 containerd[1696]: time="2026-03-07T01:21:41.680607510Z" level=info msg="StartContainer for \"b830c7452fc56ff3f725361a44c6ab5cef8a234c2c7845023e627741119fc3ad\" returns successfully"
Mar 7 01:21:42.292209 kubelet[3226]: I0307 01:21:42.292175 3226 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 7 01:21:42.292209 kubelet[3226]: I0307 01:21:42.292216 3226 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 7 01:21:42.536414 systemd-networkd[1329]: cali439f8da71f6: Gained IPv6LL
Mar 7 01:21:42.598391 kubelet[3226]: I0307 01:21:42.598266 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-6c47b979c4-6bdrb" podStartSLOduration=2.59824931 podStartE2EDuration="2.59824931s" podCreationTimestamp="2026-03-07 01:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:21:42.597229497 +0000 UTC m=+65.525481859" watchObservedRunningTime="2026-03-07 01:21:42.59824931 +0000 UTC m=+65.526501772"
Mar 7 01:21:50.005428 systemd[1]: Started sshd@7-10.200.8.18:22-10.200.16.10:58062.service - OpenSSH per-connection server daemon (10.200.16.10:58062).
Mar 7 01:21:50.470858 systemd[1]: run-containerd-runc-k8s.io-dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb-runc.pfnznv.mount: Deactivated successfully.
Mar 7 01:21:50.540838 kubelet[3226]: I0307 01:21:50.540754 3226 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-l6gbz" podStartSLOduration=38.029673508 podStartE2EDuration="58.540736522s" podCreationTimestamp="2026-03-07 01:20:52 +0000 UTC" firstStartedPulling="2026-03-07 01:21:20.96239042 +0000 UTC m=+43.890642782" lastFinishedPulling="2026-03-07 01:21:41.473453334 +0000 UTC m=+64.401705796" observedRunningTime="2026-03-07 01:21:42.623723452 +0000 UTC m=+65.551975914" watchObservedRunningTime="2026-03-07 01:21:50.540736522 +0000 UTC m=+73.468988984"
Mar 7 01:21:50.625920 sshd[5965]: Accepted publickey for core from 10.200.16.10 port 58062 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:21:50.627721 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:21:50.633750 systemd-logind[1672]: New session 10 of user core.
Mar 7 01:21:50.639279 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:21:51.145270 sshd[5965]: pam_unix(sshd:session): session closed for user core
Mar 7 01:21:51.150188 systemd[1]: sshd@7-10.200.8.18:22-10.200.16.10:58062.service: Deactivated successfully.
Mar 7 01:21:51.152482 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:21:51.154057 systemd-logind[1672]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:21:51.155653 systemd-logind[1672]: Removed session 10.
Mar 7 01:21:56.259440 systemd[1]: Started sshd@8-10.200.8.18:22-10.200.16.10:58078.service - OpenSSH per-connection server daemon (10.200.16.10:58078).
Mar 7 01:21:56.880990 sshd[6011]: Accepted publickey for core from 10.200.16.10 port 58078 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:21:56.882662 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:21:56.888081 systemd-logind[1672]: New session 11 of user core.
Mar 7 01:21:56.892275 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:21:57.395400 sshd[6011]: pam_unix(sshd:session): session closed for user core
Mar 7 01:21:57.399449 systemd[1]: sshd@8-10.200.8.18:22-10.200.16.10:58078.service: Deactivated successfully.
Mar 7 01:21:57.401992 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:21:57.402873 systemd-logind[1672]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:21:57.403991 systemd-logind[1672]: Removed session 11.
Mar 7 01:21:59.518027 systemd[1]: run-containerd-runc-k8s.io-dad8b507375119cfe4147ac92d1df2205bc8316a5784ec246059e9ea9a0d99bf-runc.qVfQyQ.mount: Deactivated successfully.
Mar 7 01:22:02.517341 systemd[1]: Started sshd@9-10.200.8.18:22-10.200.16.10:41352.service - OpenSSH per-connection server daemon (10.200.16.10:41352).
Mar 7 01:22:03.138010 sshd[6070]: Accepted publickey for core from 10.200.16.10 port 41352 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:03.139665 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:03.144170 systemd-logind[1672]: New session 12 of user core.
Mar 7 01:22:03.150444 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:22:03.554781 systemd[1]: run-containerd-runc-k8s.io-e82b4e7e7b4db366cbe15dac873ace56537e54a83d03b9da9a20b504e7034255-runc.F9iI94.mount: Deactivated successfully.
Mar 7 01:22:03.673883 sshd[6070]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:03.677677 systemd-logind[1672]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:22:03.678606 systemd[1]: sshd@9-10.200.8.18:22-10.200.16.10:41352.service: Deactivated successfully.
Mar 7 01:22:03.680916 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:22:03.682289 systemd-logind[1672]: Removed session 12.
Mar 7 01:22:07.538315 kubelet[3226]: I0307 01:22:07.537750 3226 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:22:08.785754 systemd[1]: Started sshd@10-10.200.8.18:22-10.200.16.10:41354.service - OpenSSH per-connection server daemon (10.200.16.10:41354).
Mar 7 01:22:09.441873 sshd[6119]: Accepted publickey for core from 10.200.16.10 port 41354 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:09.443666 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:09.448198 systemd-logind[1672]: New session 13 of user core.
Mar 7 01:22:09.455287 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:22:09.945351 sshd[6119]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:09.949184 systemd-logind[1672]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:22:09.950054 systemd[1]: sshd@10-10.200.8.18:22-10.200.16.10:41354.service: Deactivated successfully.
Mar 7 01:22:09.952706 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:22:09.953740 systemd-logind[1672]: Removed session 13.
Mar 7 01:22:15.060440 systemd[1]: Started sshd@11-10.200.8.18:22-10.200.16.10:42296.service - OpenSSH per-connection server daemon (10.200.16.10:42296).
Mar 7 01:22:15.682136 sshd[6135]: Accepted publickey for core from 10.200.16.10 port 42296 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:15.683198 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:15.688962 systemd-logind[1672]: New session 14 of user core.
Mar 7 01:22:15.693298 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:22:16.184260 sshd[6135]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:16.188405 systemd[1]: sshd@11-10.200.8.18:22-10.200.16.10:42296.service: Deactivated successfully.
Mar 7 01:22:16.191062 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:22:16.191894 systemd-logind[1672]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:22:16.192993 systemd-logind[1672]: Removed session 14.
Mar 7 01:22:16.300139 systemd[1]: Started sshd@12-10.200.8.18:22-10.200.16.10:42306.service - OpenSSH per-connection server daemon (10.200.16.10:42306).
Mar 7 01:22:16.919589 sshd[6149]: Accepted publickey for core from 10.200.16.10 port 42306 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:16.923384 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:16.929361 systemd-logind[1672]: New session 15 of user core.
Mar 7 01:22:16.936302 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:22:17.463669 sshd[6149]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:17.467768 systemd[1]: sshd@12-10.200.8.18:22-10.200.16.10:42306.service: Deactivated successfully.
Mar 7 01:22:17.470312 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:22:17.471261 systemd-logind[1672]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:22:17.472298 systemd-logind[1672]: Removed session 15.
Mar 7 01:22:17.584767 systemd[1]: Started sshd@13-10.200.8.18:22-10.200.16.10:42316.service - OpenSSH per-connection server daemon (10.200.16.10:42316).
Mar 7 01:22:18.208170 sshd[6175]: Accepted publickey for core from 10.200.16.10 port 42316 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:18.209809 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:18.214844 systemd-logind[1672]: New session 16 of user core.
Mar 7 01:22:18.220293 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:22:18.709887 sshd[6175]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:18.715058 systemd[1]: sshd@13-10.200.8.18:22-10.200.16.10:42316.service: Deactivated successfully.
Mar 7 01:22:18.718370 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:22:18.719233 systemd-logind[1672]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:22:18.720307 systemd-logind[1672]: Removed session 16.
Mar 7 01:22:23.825444 systemd[1]: Started sshd@14-10.200.8.18:22-10.200.16.10:37088.service - OpenSSH per-connection server daemon (10.200.16.10:37088).
Mar 7 01:22:24.454392 sshd[6217]: Accepted publickey for core from 10.200.16.10 port 37088 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:24.455944 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:24.461463 systemd-logind[1672]: New session 17 of user core.
Mar 7 01:22:24.469298 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:22:24.961304 sshd[6217]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:24.965450 systemd[1]: sshd@14-10.200.8.18:22-10.200.16.10:37088.service: Deactivated successfully.
Mar 7 01:22:24.967769 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:22:24.968600 systemd-logind[1672]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:22:24.969647 systemd-logind[1672]: Removed session 17.
Mar 7 01:22:25.077434 systemd[1]: Started sshd@15-10.200.8.18:22-10.200.16.10:37098.service - OpenSSH per-connection server daemon (10.200.16.10:37098).
Mar 7 01:22:25.697307 sshd[6230]: Accepted publickey for core from 10.200.16.10 port 37098 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:25.698846 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:25.703838 systemd-logind[1672]: New session 18 of user core.
Mar 7 01:22:25.708294 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:22:26.260417 sshd[6230]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:26.264300 systemd-logind[1672]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:22:26.265240 systemd[1]: sshd@15-10.200.8.18:22-10.200.16.10:37098.service: Deactivated successfully.
Mar 7 01:22:26.267991 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:22:26.269333 systemd-logind[1672]: Removed session 18.
Mar 7 01:22:26.377466 systemd[1]: Started sshd@16-10.200.8.18:22-10.200.16.10:37110.service - OpenSSH per-connection server daemon (10.200.16.10:37110).
Mar 7 01:22:27.006135 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 37110 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:27.007438 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:27.011501 systemd-logind[1672]: New session 19 of user core.
Mar 7 01:22:27.019279 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:22:28.087909 sshd[6240]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:28.091971 systemd-logind[1672]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:22:28.092846 systemd[1]: sshd@16-10.200.8.18:22-10.200.16.10:37110.service: Deactivated successfully.
Mar 7 01:22:28.095064 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:22:28.096697 systemd-logind[1672]: Removed session 19.
Mar 7 01:22:28.203433 systemd[1]: Started sshd@17-10.200.8.18:22-10.200.16.10:37114.service - OpenSSH per-connection server daemon (10.200.16.10:37114).
Mar 7 01:22:28.824362 sshd[6264]: Accepted publickey for core from 10.200.16.10 port 37114 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:28.826723 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:28.833178 systemd-logind[1672]: New session 20 of user core.
Mar 7 01:22:28.841286 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:22:29.481276 sshd[6264]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:29.485499 systemd-logind[1672]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:22:29.486341 systemd[1]: sshd@17-10.200.8.18:22-10.200.16.10:37114.service: Deactivated successfully.
Mar 7 01:22:29.490977 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:22:29.493501 systemd-logind[1672]: Removed session 20.
Mar 7 01:22:29.599446 systemd[1]: Started sshd@18-10.200.8.18:22-10.200.16.10:37122.service - OpenSSH per-connection server daemon (10.200.16.10:37122).
Mar 7 01:22:29.912443 systemd[1]: run-containerd-runc-k8s.io-dad8b507375119cfe4147ac92d1df2205bc8316a5784ec246059e9ea9a0d99bf-runc.KrtuzF.mount: Deactivated successfully.
Mar 7 01:22:30.231288 sshd[6295]: Accepted publickey for core from 10.200.16.10 port 37122 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:30.232569 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:30.237752 systemd-logind[1672]: New session 21 of user core.
Mar 7 01:22:30.242267 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:22:30.733600 sshd[6295]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:30.737732 systemd-logind[1672]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:22:30.738554 systemd[1]: sshd@18-10.200.8.18:22-10.200.16.10:37122.service: Deactivated successfully.
Mar 7 01:22:30.741698 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:22:30.742721 systemd-logind[1672]: Removed session 21.
Mar 7 01:22:35.849427 systemd[1]: Started sshd@19-10.200.8.18:22-10.200.16.10:58144.service - OpenSSH per-connection server daemon (10.200.16.10:58144).
Mar 7 01:22:36.471396 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 58144 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:36.472977 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:36.478193 systemd-logind[1672]: New session 22 of user core.
Mar 7 01:22:36.486279 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:22:36.972562 sshd[6350]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:36.976427 systemd[1]: sshd@19-10.200.8.18:22-10.200.16.10:58144.service: Deactivated successfully.
Mar 7 01:22:36.978675 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:22:36.979581 systemd-logind[1672]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:22:36.980679 systemd-logind[1672]: Removed session 22.
Mar 7 01:22:40.049145 containerd[1696]: time="2026-03-07T01:22:40.049069492Z" level=info msg="StopPodSandbox for \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\""
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.083 [WARNING][6372] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.083 [INFO][6372] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.083 [INFO][6372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" iface="eth0" netns=""
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.083 [INFO][6372] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.083 [INFO][6372] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.105 [INFO][6379] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.105 [INFO][6379] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.106 [INFO][6379] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.112 [WARNING][6379] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.112 [INFO][6379] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.113 [INFO][6379] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:22:40.116167 containerd[1696]: 2026-03-07 01:22:40.115 [INFO][6372] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.116751 containerd[1696]: time="2026-03-07T01:22:40.116221016Z" level=info msg="TearDown network for sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" successfully"
Mar 7 01:22:40.116751 containerd[1696]: time="2026-03-07T01:22:40.116253216Z" level=info msg="StopPodSandbox for \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" returns successfully"
Mar 7 01:22:40.116751 containerd[1696]: time="2026-03-07T01:22:40.116710523Z" level=info msg="RemovePodSandbox for \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\""
Mar 7 01:22:40.116751 containerd[1696]: time="2026-03-07T01:22:40.116747323Z" level=info msg="Forcibly stopping sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\""
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.151 [WARNING][6393] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" WorkloadEndpoint="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.151 [INFO][6393] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.151 [INFO][6393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" iface="eth0" netns=""
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.151 [INFO][6393] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.151 [INFO][6393] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.175 [INFO][6400] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.175 [INFO][6400] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.175 [INFO][6400] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.183 [WARNING][6400] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.183 [INFO][6400] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" HandleID="k8s-pod-network.86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e" Workload="ci--4081.3.6--n--8271a56a8b-k8s-whisker--65d4c8b8c7--bfgrz-eth0"
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.184 [INFO][6400] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:22:40.188542 containerd[1696]: 2026-03-07 01:22:40.185 [INFO][6393] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e"
Mar 7 01:22:40.188542 containerd[1696]: time="2026-03-07T01:22:40.186956389Z" level=info msg="TearDown network for sandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" successfully"
Mar 7 01:22:40.199122 containerd[1696]: time="2026-03-07T01:22:40.199057255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:22:40.199319 containerd[1696]: time="2026-03-07T01:22:40.199162957Z" level=info msg="RemovePodSandbox \"86db48d8228c3a11621a1afdde945ee4d1dba5d7a53dd63b3f80adad93d9bf8e\" returns successfully"
Mar 7 01:22:42.090861 systemd[1]: Started sshd@20-10.200.8.18:22-10.200.16.10:35750.service - OpenSSH per-connection server daemon (10.200.16.10:35750).
Mar 7 01:22:42.715147 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 35750 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:42.716293 sshd[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:42.721051 systemd-logind[1672]: New session 23 of user core.
Mar 7 01:22:42.724283 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:22:43.234958 sshd[6409]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:43.238821 systemd-logind[1672]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:22:43.239656 systemd[1]: sshd@20-10.200.8.18:22-10.200.16.10:35750.service: Deactivated successfully.
Mar 7 01:22:43.241929 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:22:43.243266 systemd-logind[1672]: Removed session 23.
Mar 7 01:22:48.352429 systemd[1]: Started sshd@21-10.200.8.18:22-10.200.16.10:35754.service - OpenSSH per-connection server daemon (10.200.16.10:35754).
Mar 7 01:22:48.974849 sshd[6432]: Accepted publickey for core from 10.200.16.10 port 35754 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:48.975864 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:48.980781 systemd-logind[1672]: New session 24 of user core.
Mar 7 01:22:48.983295 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:22:49.478377 sshd[6432]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:49.481704 systemd[1]: sshd@21-10.200.8.18:22-10.200.16.10:35754.service: Deactivated successfully.
Mar 7 01:22:49.484249 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:22:49.486588 systemd-logind[1672]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:22:49.487796 systemd-logind[1672]: Removed session 24.
Mar 7 01:22:50.464323 systemd[1]: run-containerd-runc-k8s.io-dd8caea8b37c919a9957e40dadfe1e9f1cd3565a693def6eb3c019ef00e4aabb-runc.MKbnvC.mount: Deactivated successfully.
Mar 7 01:22:54.603475 systemd[1]: Started sshd@22-10.200.8.18:22-10.200.16.10:41672.service - OpenSSH per-connection server daemon (10.200.16.10:41672).
Mar 7 01:22:55.245584 sshd[6467]: Accepted publickey for core from 10.200.16.10 port 41672 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:22:55.247247 sshd[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:22:55.252197 systemd-logind[1672]: New session 25 of user core.
Mar 7 01:22:55.255287 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:22:55.752641 sshd[6467]: pam_unix(sshd:session): session closed for user core
Mar 7 01:22:55.756561 systemd[1]: sshd@22-10.200.8.18:22-10.200.16.10:41672.service: Deactivated successfully.
Mar 7 01:22:55.758797 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:22:55.759701 systemd-logind[1672]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:22:55.761177 systemd-logind[1672]: Removed session 25.
Mar 7 01:23:00.868423 systemd[1]: Started sshd@23-10.200.8.18:22-10.200.16.10:43874.service - OpenSSH per-connection server daemon (10.200.16.10:43874).
Mar 7 01:23:01.491694 sshd[6534]: Accepted publickey for core from 10.200.16.10 port 43874 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:23:01.493225 sshd[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:23:01.497596 systemd-logind[1672]: New session 26 of user core.
Mar 7 01:23:01.502286 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:23:01.993976 sshd[6534]: pam_unix(sshd:session): session closed for user core
Mar 7 01:23:01.998198 systemd[1]: sshd@23-10.200.8.18:22-10.200.16.10:43874.service: Deactivated successfully.
Mar 7 01:23:02.000649 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:23:02.001514 systemd-logind[1672]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:23:02.002527 systemd-logind[1672]: Removed session 26.
Mar 7 01:23:03.542344 systemd[1]: run-containerd-runc-k8s.io-e82b4e7e7b4db366cbe15dac873ace56537e54a83d03b9da9a20b504e7034255-runc.22tAgQ.mount: Deactivated successfully.