Mar 7 01:14:31.110401 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:14:31.110428 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:14:31.110443 kernel: BIOS-provided physical RAM map:
Mar 7 01:14:31.110450 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:14:31.110456 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 7 01:14:31.110464 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Mar 7 01:14:31.110473 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Mar 7 01:14:31.110480 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Mar 7 01:14:31.110488 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Mar 7 01:14:31.110499 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Mar 7 01:14:31.110506 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 7 01:14:31.110512 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 7 01:14:31.110523 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 7 01:14:31.110529 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 7 01:14:31.110543 kernel: printk: bootconsole [earlyser0] enabled
Mar 7 01:14:31.110569 kernel: NX (Execute Disable) protection: active
Mar 7 01:14:31.110576 kernel: APIC: Static calls initialized
Mar 7 01:14:31.110583 kernel: efi: EFI v2.7 by Microsoft
Mar 7 01:14:31.110595 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f421418
Mar 7 01:14:31.110602 kernel: SMBIOS 3.1.0 present.
Mar 7 01:14:31.110609 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 7 01:14:31.110620 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 7 01:14:31.110628 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Mar 7 01:14:31.110635 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 7 01:14:31.110645 kernel: Hyper-V: Nested features: 0x1e0101
Mar 7 01:14:31.110656 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 7 01:14:31.110662 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 7 01:14:31.110674 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:14:31.110682 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:14:31.110690 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Mar 7 01:14:31.110701 kernel: tsc: Detected 2593.905 MHz processor
Mar 7 01:14:31.110709 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:14:31.110716 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:14:31.110727 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Mar 7 01:14:31.110737 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:14:31.110744 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:14:31.110755 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Mar 7 01:14:31.110762 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Mar 7 01:14:31.110769 kernel: Using GB pages for direct mapping
Mar 7 01:14:31.110781 kernel: Secure boot disabled
Mar 7 01:14:31.110792 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:14:31.110806 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 7 01:14:31.110813 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110821 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110834 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 01:14:31.110842 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 7 01:14:31.110854 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110862 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110876 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110884 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110897 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110910 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:14:31.110924 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 7 01:14:31.110939 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Mar 7 01:14:31.110958 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 7 01:14:31.110974 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 7 01:14:31.110993 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 7 01:14:31.111017 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 7 01:14:31.111032 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 7 01:14:31.111049 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Mar 7 01:14:31.111064 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 7 01:14:31.111079 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:14:31.111093 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:14:31.111108 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 7 01:14:31.111123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Mar 7 01:14:31.111139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Mar 7 01:14:31.111167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 7 01:14:31.111184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 7 01:14:31.111199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 7 01:14:31.111215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 7 01:14:31.111230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 7 01:14:31.111245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 7 01:14:31.111259 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 7 01:14:31.111279 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Mar 7 01:14:31.111306 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Mar 7 01:14:31.111323 kernel: Zone ranges:
Mar 7 01:14:31.111336 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:14:31.111352 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:14:31.111367 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:14:31.111381 kernel: Movable zone start for each node
Mar 7 01:14:31.111397 kernel: Early memory node ranges
Mar 7 01:14:31.111414 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:14:31.111430 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Mar 7 01:14:31.111448 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Mar 7 01:14:31.111465 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 7 01:14:31.111481 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:14:31.111497 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 7 01:14:31.111514 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:14:31.111530 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:14:31.111558 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Mar 7 01:14:31.111576 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Mar 7 01:14:31.111592 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 7 01:14:31.111609 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 7 01:14:31.111623 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:14:31.111637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:14:31.111653 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:14:31.111673 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 7 01:14:31.111693 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:14:31.111711 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 7 01:14:31.111727 kernel: Booting paravirtualized kernel on Hyper-V
Mar 7 01:14:31.111741 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:14:31.111764 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:14:31.111780 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:14:31.111793 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:14:31.111807 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:14:31.111821 kernel: Hyper-V: PV spinlocks enabled
Mar 7 01:14:31.111836 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:14:31.111857 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:14:31.111876 kernel: random: crng init done
Mar 7 01:14:31.111899 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 7 01:14:31.111917 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:14:31.111934 kernel: Fallback order for Node 0: 0
Mar 7 01:14:31.111948 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Mar 7 01:14:31.111963 kernel: Policy zone: Normal
Mar 7 01:14:31.111978 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:14:31.111995 kernel: software IO TLB: area num 2.
Mar 7 01:14:31.112011 kernel: Memory: 8066052K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316916K reserved, 0K cma-reserved)
Mar 7 01:14:31.112025 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:14:31.112060 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:14:31.112076 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:14:31.112093 kernel: Dynamic Preempt: voluntary
Mar 7 01:14:31.112115 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:14:31.112134 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:14:31.112150 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:14:31.112167 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:14:31.112183 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:14:31.112204 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:14:31.112225 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:14:31.112243 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:14:31.112260 kernel: Using NULL legacy PIC
Mar 7 01:14:31.112276 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 7 01:14:31.112293 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:14:31.112310 kernel: Console: colour dummy device 80x25
Mar 7 01:14:31.112322 kernel: printk: console [tty1] enabled
Mar 7 01:14:31.112334 kernel: printk: console [ttyS0] enabled
Mar 7 01:14:31.112350 kernel: printk: bootconsole [earlyser0] disabled
Mar 7 01:14:31.112363 kernel: ACPI: Core revision 20230628
Mar 7 01:14:31.112377 kernel: Failed to register legacy timer interrupt
Mar 7 01:14:31.112390 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:14:31.112406 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 01:14:31.112421 kernel: Hyper-V: Using IPI hypercalls
Mar 7 01:14:31.112435 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Mar 7 01:14:31.112451 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Mar 7 01:14:31.112468 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Mar 7 01:14:31.112487 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Mar 7 01:14:31.112501 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Mar 7 01:14:31.112515 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 7 01:14:31.112537 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Mar 7 01:14:31.112569 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:14:31.112584 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:14:31.112596 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:14:31.112608 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:14:31.112621 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:14:31.112633 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:14:31.112647 kernel: RETBleed: Vulnerable
Mar 7 01:14:31.112655 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:14:31.112663 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:14:31.112671 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:14:31.112679 kernel: active return thunk: its_return_thunk
Mar 7 01:14:31.112686 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:14:31.112694 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:14:31.112702 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:14:31.112710 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:14:31.112718 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:14:31.112728 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:14:31.112736 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:14:31.112744 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:14:31.112752 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 7 01:14:31.112760 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 7 01:14:31.112775 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 7 01:14:31.112783 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Mar 7 01:14:31.112791 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:14:31.112799 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:14:31.112806 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:14:31.112814 kernel: landlock: Up and running.
Mar 7 01:14:31.112822 kernel: SELinux: Initializing.
Mar 7 01:14:31.112833 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:14:31.112841 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:14:31.112849 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 7 01:14:31.112857 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:14:31.112865 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:14:31.112873 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:14:31.112881 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:14:31.112889 kernel: signal: max sigframe size: 3632
Mar 7 01:14:31.112897 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:14:31.112908 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:14:31.112916 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:14:31.112924 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:14:31.112943 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:14:31.112951 kernel: .... node #0, CPUs: #1
Mar 7 01:14:31.112962 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Mar 7 01:14:31.112973 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:14:31.112981 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:14:31.112994 kernel: smpboot: Max logical packages: 1
Mar 7 01:14:31.113004 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Mar 7 01:14:31.113012 kernel: devtmpfs: initialized
Mar 7 01:14:31.113020 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:14:31.113033 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 7 01:14:31.113041 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:14:31.113054 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:14:31.113062 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:14:31.113074 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:14:31.113082 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:14:31.113092 kernel: audit: type=2000 audit(1772846069.030:1): state=initialized audit_enabled=0 res=1
Mar 7 01:14:31.113104 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:14:31.113112 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:14:31.113125 kernel: cpuidle: using governor menu
Mar 7 01:14:31.113135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:14:31.113143 kernel: dca service started, version 1.12.1
Mar 7 01:14:31.113150 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Mar 7 01:14:31.113158 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Mar 7 01:14:31.113166 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:14:31.113177 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:14:31.113185 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:14:31.113198 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:14:31.113206 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:14:31.113214 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:14:31.113226 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:14:31.113234 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:14:31.113244 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:14:31.113256 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:14:31.113265 kernel: ACPI: Interpreter enabled
Mar 7 01:14:31.113277 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:14:31.113285 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:14:31.113294 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:14:31.113306 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:14:31.113314 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 7 01:14:31.113322 kernel: iommu: Default domain type: Translated
Mar 7 01:14:31.113330 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:14:31.113342 kernel: efivars: Registered efivars operations
Mar 7 01:14:31.113352 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:14:31.113364 kernel: PCI: System does not support PCI
Mar 7 01:14:31.113373 kernel: vgaarb: loaded
Mar 7 01:14:31.113381 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Mar 7 01:14:31.113393 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:14:31.113401 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:14:31.113409 kernel: pnp: PnP ACPI init
Mar 7 01:14:31.113422 kernel: pnp: PnP ACPI: found 3 devices
Mar 7 01:14:31.113430 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:14:31.113442 kernel: NET: Registered PF_INET protocol family
Mar 7 01:14:31.113452 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:14:31.113460 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:14:31.113473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:14:31.113481 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:14:31.113493 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:14:31.113502 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:14:31.113514 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:14:31.113523 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:14:31.113537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:14:31.115565 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:14:31.115592 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:14:31.115612 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:14:31.115622 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Mar 7 01:14:31.115631 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:14:31.115644 kernel: Initialise system trusted keyrings
Mar 7 01:14:31.115652 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:14:31.115668 kernel: Key type asymmetric registered
Mar 7 01:14:31.115677 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:14:31.115690 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:14:31.115698 kernel: io scheduler mq-deadline registered
Mar 7 01:14:31.115709 kernel: io scheduler kyber registered
Mar 7 01:14:31.115719 kernel: io scheduler bfq registered
Mar 7 01:14:31.115727 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:14:31.115740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:14:31.115749 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:14:31.115761 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:14:31.115778 kernel: i8042: PNP: No PS/2 controller found.
Mar 7 01:14:31.115940 kernel: rtc_cmos 00:02: registered as rtc0
Mar 7 01:14:31.116059 kernel: rtc_cmos 00:02: setting system clock to 2026-03-07T01:14:30 UTC (1772846070)
Mar 7 01:14:31.116175 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 7 01:14:31.116194 kernel: intel_pstate: CPU model not supported
Mar 7 01:14:31.116209 kernel: efifb: probing for efifb
Mar 7 01:14:31.116224 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 01:14:31.116243 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 01:14:31.116257 kernel: efifb: scrolling: redraw
Mar 7 01:14:31.116272 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:14:31.116287 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:14:31.116301 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:14:31.116316 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:14:31.116330 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:14:31.116343 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:14:31.116358 kernel: Segment Routing with IPv6
Mar 7 01:14:31.116375 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:14:31.116389 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:14:31.116404 kernel: Key type dns_resolver registered
Mar 7 01:14:31.116418 kernel: IPI shorthand broadcast: enabled
Mar 7 01:14:31.116432 kernel: sched_clock: Marking stable (927003100, 53646400)->(1218718800, -238069300)
Mar 7 01:14:31.116448 kernel: registered taskstats version 1
Mar 7 01:14:31.116462 kernel: Loading compiled-in X.509 certificates
Mar 7 01:14:31.116476 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:14:31.116491 kernel: Key type .fscrypt registered
Mar 7 01:14:31.116510 kernel: Key type fscrypt-provisioning registered
Mar 7 01:14:31.116525 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:14:31.116540 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:14:31.118583 kernel: ima: No architecture policies found
Mar 7 01:14:31.118602 kernel: clk: Disabling unused clocks
Mar 7 01:14:31.118617 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:14:31.118631 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:14:31.118644 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:14:31.118658 kernel: Run /init as init process
Mar 7 01:14:31.118677 kernel: with arguments:
Mar 7 01:14:31.118691 kernel: /init
Mar 7 01:14:31.118703 kernel: with environment:
Mar 7 01:14:31.118715 kernel: HOME=/
Mar 7 01:14:31.118724 kernel: TERM=linux
Mar 7 01:14:31.118736 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:14:31.118752 systemd[1]: Detected virtualization microsoft.
Mar 7 01:14:31.118765 systemd[1]: Detected architecture x86-64.
Mar 7 01:14:31.118777 systemd[1]: Running in initrd.
Mar 7 01:14:31.118785 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:14:31.118797 systemd[1]: Hostname set to .
Mar 7 01:14:31.118807 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:14:31.118816 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:14:31.118829 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:14:31.118838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:14:31.118847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:14:31.118862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:14:31.118872 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:14:31.118881 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:14:31.118895 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:14:31.118905 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:14:31.118917 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:14:31.118927 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:14:31.118940 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:14:31.118951 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:14:31.118959 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:14:31.118972 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:14:31.118981 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:14:31.118990 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:14:31.119003 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:14:31.119012 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:14:31.119025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:14:31.119036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:14:31.119047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:14:31.119059 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:14:31.119067 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:14:31.119080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:14:31.119089 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:14:31.119101 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:14:31.119110 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:14:31.119143 systemd-journald[177]: Collecting audit messages is disabled.
Mar 7 01:14:31.119169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:14:31.119181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:31.119191 systemd-journald[177]: Journal started
Mar 7 01:14:31.119218 systemd-journald[177]: Runtime Journal (/run/log/journal/fde60b17c8cf4689bb33d76792c89a6f) is 8.0M, max 158.7M, 150.7M free.
Mar 7 01:14:31.136376 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:14:31.136882 systemd-modules-load[178]: Inserted module 'overlay'
Mar 7 01:14:31.148061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:14:31.155372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:14:31.159411 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:14:31.183665 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:14:31.183702 kernel: Bridge firewalling registered
Mar 7 01:14:31.185633 systemd-modules-load[178]: Inserted module 'br_netfilter'
Mar 7 01:14:31.189060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:14:31.194962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:14:31.205700 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:14:31.213455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:31.220867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:14:31.221146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:14:31.237849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:14:31.247726 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:14:31.253403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:14:31.272848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:14:31.276433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:14:31.286862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:31.297726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:14:31.308748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:14:31.318429 dracut-cmdline[212]: dracut-dracut-053
Mar 7 01:14:31.322632 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:14:31.377799 systemd-resolved[214]: Positive Trust Anchors:
Mar 7 01:14:31.377815 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:14:31.377872 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:14:31.407403 systemd-resolved[214]: Defaulting to hostname 'linux'.
Mar 7 01:14:31.411580 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:14:31.419486 kernel: SCSI subsystem initialized
Mar 7 01:14:31.419700 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:14:31.429569 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:14:31.441576 kernel: iscsi: registered transport (tcp)
Mar 7 01:14:31.462892 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:14:31.462976 kernel: QLogic iSCSI HBA Driver
Mar 7 01:14:31.499005 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:14:31.508802 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:14:31.537061 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:14:31.537148 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:14:31.540670 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:14:31.582586 kernel: raid6: avx512x4 gen() 18200 MB/s
Mar 7 01:14:31.601570 kernel: raid6: avx512x2 gen() 18500 MB/s
Mar 7 01:14:31.620568 kernel: raid6: avx512x1 gen() 18657 MB/s
Mar 7 01:14:31.640569 kernel: raid6: avx2x4 gen() 18572 MB/s
Mar 7 01:14:31.659566 kernel: raid6: avx2x2 gen() 18590 MB/s
Mar 7 01:14:31.679696 kernel: raid6: avx2x1 gen() 14131 MB/s
Mar 7 01:14:31.679725 kernel: raid6: using algorithm avx512x1 gen() 18657 MB/s
Mar 7 01:14:31.701931 kernel: raid6: .... xor() 26936 MB/s, rmw enabled
Mar 7 01:14:31.701965 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:14:31.724576 kernel: xor: automatically using best checksumming function avx
Mar 7 01:14:31.872575 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:14:31.882661 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:14:31.893744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:14:31.908848 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 7 01:14:31.913643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:14:31.930749 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:14:31.944228 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Mar 7 01:14:31.972469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:14:31.982717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:14:32.029144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:14:32.041797 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:14:32.060168 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:14:32.068927 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:14:32.072875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:14:32.072962 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:14:32.084833 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:14:32.110330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:14:32.130564 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:14:32.158612 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:14:32.158670 kernel: AES CTR mode by8 optimization enabled Mar 7 01:14:32.159573 kernel: hv_vmbus: Vmbus version:5.2 Mar 7 01:14:32.182992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:14:32.186759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:14:32.212683 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 7 01:14:32.212713 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:14:32.190724 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:14:32.198798 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:14:32.198950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:14:32.205411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:14:32.227131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:14:32.233315 kernel: hv_vmbus: registering driver hid_hyperv Mar 7 01:14:32.246729 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 7 01:14:32.246816 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 7 01:14:32.254774 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 7 01:14:32.254830 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 7 01:14:32.261575 kernel: hv_vmbus: registering driver hv_netvsc Mar 7 01:14:32.276198 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 7 01:14:32.270582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:14:32.270704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:14:32.286110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:14:32.295023 kernel: PTP clock support registered Mar 7 01:14:32.316333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:14:32.328676 kernel: hv_vmbus: registering driver hv_storvsc Mar 7 01:14:32.330723 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 7 01:14:32.344431 kernel: hv_utils: Registering HyperV Utility Driver Mar 7 01:14:32.344490 kernel: hv_vmbus: registering driver hv_utils Mar 7 01:14:32.349213 kernel: scsi host1: storvsc_host_t Mar 7 01:14:32.349284 kernel: hv_utils: Heartbeat IC version 3.0 Mar 7 01:14:32.349304 kernel: scsi host0: storvsc_host_t Mar 7 01:14:32.349334 kernel: hv_utils: Shutdown IC version 3.2 Mar 7 01:14:32.356220 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 7 01:14:32.356281 kernel: hv_utils: TimeSync IC version 4.0 Mar 7 01:14:32.396390 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 7 01:14:32.398816 systemd-resolved[214]: Clock change detected. Flushing caches. Mar 7 01:14:32.406278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:14:32.425268 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 7 01:14:32.425561 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:14:32.427130 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 7 01:14:32.441266 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 7 01:14:32.441495 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 7 01:14:32.441626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:14:32.447634 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:14:32.447873 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 7 01:14:32.448405 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 7 01:14:32.462757 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:14:32.462815 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:14:32.471510 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: VF slot 1 added Mar 7 01:14:32.486207 kernel: hv_vmbus: registering driver hv_pci Mar 7 01:14:32.494717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x85 
status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:14:32.494962 kernel: hv_pci e1f886c8-bd75-4b64-9a61-e55e5fa7135f: PCI VMBus probing: Using version 0x10004 Mar 7 01:14:32.503018 kernel: hv_pci e1f886c8-bd75-4b64-9a61-e55e5fa7135f: PCI host bridge to bus bd75:00 Mar 7 01:14:32.503331 kernel: pci_bus bd75:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 7 01:14:32.504122 kernel: pci_bus bd75:00: No busn resource found for root bus, will use [bus 00-ff] Mar 7 01:14:32.512353 kernel: pci bd75:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 7 01:14:32.519421 kernel: pci bd75:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:14:32.519467 kernel: pci bd75:00:02.0: enabling Extended Tags Mar 7 01:14:32.534166 kernel: pci bd75:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bd75:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 7 01:14:32.541783 kernel: pci_bus bd75:00: busn_res: [bus 00-ff] end is updated to 00 Mar 7 01:14:32.542130 kernel: pci bd75:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:14:32.711149 kernel: mlx5_core bd75:00:02.0: enabling device (0000 -> 0002) Mar 7 01:14:32.716126 kernel: mlx5_core bd75:00:02.0: firmware version: 14.30.5026 Mar 7 01:14:32.930578 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: VF registering: eth1 Mar 7 01:14:32.930986 kernel: mlx5_core bd75:00:02.0 eth1: joined to eth0 Mar 7 01:14:32.935020 kernel: mlx5_core bd75:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 7 01:14:32.946127 kernel: mlx5_core bd75:00:02.0 enP48501s1: renamed from eth1 Mar 7 01:14:33.114573 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457) Mar 7 01:14:33.124957 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 7 01:14:33.138948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Mar 7 01:14:33.154167 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 7 01:14:33.223127 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (476) Mar 7 01:14:33.236885 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 7 01:14:33.240725 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 7 01:14:33.260358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:14:33.279133 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:14:33.288196 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:14:33.296146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:14:34.300516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:14:34.300578 disk-uuid[610]: The operation has completed successfully. Mar 7 01:14:34.388591 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:14:34.388708 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:14:34.415253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:14:34.422366 sh[723]: Success Mar 7 01:14:34.457130 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 7 01:14:34.892685 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:14:34.908227 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:14:34.916861 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 7 01:14:34.954121 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:14:34.954172 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:14:34.960898 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:14:34.964155 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:14:34.967020 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:14:35.433231 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:14:35.437077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:14:35.445331 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:14:35.451233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:14:35.480129 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:14:35.480184 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:14:35.484767 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:14:35.536984 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:14:35.551126 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:14:35.552667 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:14:35.567384 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 7 01:14:35.573569 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:14:35.581261 systemd-networkd[897]: lo: Link UP Mar 7 01:14:35.581575 systemd-networkd[897]: lo: Gained carrier Mar 7 01:14:35.584083 systemd-networkd[897]: Enumeration completed Mar 7 01:14:35.584990 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:14:35.584996 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:14:35.585628 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:14:35.588616 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:14:35.589316 systemd[1]: Reached target network.target - Network. Mar 7 01:14:35.615398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:14:35.656132 kernel: mlx5_core bd75:00:02.0 enP48501s1: Link up Mar 7 01:14:35.699129 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: Data path switched to VF: enP48501s1 Mar 7 01:14:35.699561 systemd-networkd[897]: enP48501s1: Link UP Mar 7 01:14:35.699713 systemd-networkd[897]: eth0: Link UP Mar 7 01:14:35.699920 systemd-networkd[897]: eth0: Gained carrier Mar 7 01:14:35.699934 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:14:35.713929 systemd-networkd[897]: enP48501s1: Gained carrier Mar 7 01:14:35.754158 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.30/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 7 01:14:37.016600 ignition[908]: Ignition 2.19.0 Mar 7 01:14:37.016613 ignition[908]: Stage: fetch-offline Mar 7 01:14:37.020547 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 7 01:14:37.016656 ignition[908]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:14:37.016666 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:14:37.016784 ignition[908]: parsed url from cmdline: "" Mar 7 01:14:37.016788 ignition[908]: no config URL provided Mar 7 01:14:37.016795 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:14:37.037442 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 7 01:14:37.016806 ignition[908]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:14:37.016812 ignition[908]: failed to fetch config: resource requires networking Mar 7 01:14:37.017174 ignition[908]: Ignition finished successfully Mar 7 01:14:37.055810 ignition[915]: Ignition 2.19.0 Mar 7 01:14:37.055818 ignition[915]: Stage: fetch Mar 7 01:14:37.056033 ignition[915]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:14:37.056049 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:14:37.056170 ignition[915]: parsed url from cmdline: "" Mar 7 01:14:37.056176 ignition[915]: no config URL provided Mar 7 01:14:37.056182 ignition[915]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:14:37.056189 ignition[915]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:14:37.056218 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 01:14:37.151457 ignition[915]: GET result: OK Mar 7 01:14:37.151565 ignition[915]: config has been read from IMDS userdata Mar 7 01:14:37.151594 ignition[915]: parsing config with SHA512: 05b7404b80f11babb9aaf0caef5bf7d854e410f0b618969aca32cc9f78b33f24810f56b29b73ec9adf49d0675229512f452f809fc088a42762cdc9438e472305 Mar 7 01:14:37.155690 unknown[915]: fetched base config from "system" Mar 7 01:14:37.156150 ignition[915]: fetch: fetch complete Mar 7 01:14:37.155697 unknown[915]: fetched base config from "system" Mar 7 01:14:37.156157 ignition[915]: 
fetch: fetch passed Mar 7 01:14:37.155701 unknown[915]: fetched user config from "azure" Mar 7 01:14:37.156209 ignition[915]: Ignition finished successfully Mar 7 01:14:37.169719 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:14:37.182256 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:14:37.203881 ignition[922]: Ignition 2.19.0 Mar 7 01:14:37.203894 ignition[922]: Stage: kargs Mar 7 01:14:37.204134 ignition[922]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:14:37.208368 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:14:37.204148 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:14:37.205054 ignition[922]: kargs: kargs passed Mar 7 01:14:37.205100 ignition[922]: Ignition finished successfully Mar 7 01:14:37.224372 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:14:37.244433 ignition[928]: Ignition 2.19.0 Mar 7 01:14:37.244447 ignition[928]: Stage: disks Mar 7 01:14:37.246967 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:14:37.244664 ignition[928]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:14:37.250560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:14:37.244678 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:14:37.255137 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:14:37.245949 ignition[928]: disks: disks passed Mar 7 01:14:37.258834 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:14:37.245994 ignition[928]: Ignition finished successfully Mar 7 01:14:37.264600 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:14:37.272972 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:14:37.291387 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 7 01:14:37.401471 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 7 01:14:37.406369 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:14:37.417292 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:14:37.514124 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:14:37.514292 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:14:37.515095 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:14:37.574246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:14:37.594129 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Mar 7 01:14:37.601943 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:14:37.602034 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:14:37.604670 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:14:37.612123 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:14:37.630219 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:14:37.634971 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 01:14:37.644027 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:14:37.644073 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:14:37.647010 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:14:37.662499 systemd-networkd[897]: eth0: Gained IPv6LL Mar 7 01:14:37.664761 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:14:37.677867 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 7 01:14:38.661939 coreos-metadata[965]: Mar 07 01:14:38.661 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:14:38.668036 coreos-metadata[965]: Mar 07 01:14:38.667 INFO Fetch successful Mar 7 01:14:38.668036 coreos-metadata[965]: Mar 07 01:14:38.668 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:14:38.685862 coreos-metadata[965]: Mar 07 01:14:38.685 INFO Fetch successful Mar 7 01:14:38.737800 coreos-metadata[965]: Mar 07 01:14:38.737 INFO wrote hostname ci-4081.3.6-n-1070eafa86 to /sysroot/etc/hostname Mar 7 01:14:38.742759 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:14:38.747342 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:14:38.776967 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:14:38.808067 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:14:38.813831 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:14:39.895053 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:14:39.906204 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:14:39.915289 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:14:39.922190 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:14:39.925536 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:14:39.956656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 7 01:14:39.963040 ignition[1067]: INFO : Ignition 2.19.0 Mar 7 01:14:39.963040 ignition[1067]: INFO : Stage: mount Mar 7 01:14:39.963040 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:14:39.963040 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:14:39.978479 ignition[1067]: INFO : mount: mount passed Mar 7 01:14:39.978479 ignition[1067]: INFO : Ignition finished successfully Mar 7 01:14:39.966670 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:14:39.988552 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:14:39.995862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:14:40.028131 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1079) Mar 7 01:14:40.033119 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:14:40.033167 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:14:40.038277 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:14:40.046311 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:14:40.047774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:14:40.078699 ignition[1096]: INFO : Ignition 2.19.0 Mar 7 01:14:40.078699 ignition[1096]: INFO : Stage: files Mar 7 01:14:40.087095 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:14:40.087095 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:14:40.087095 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:14:40.113178 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:14:40.113178 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:14:40.287538 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:14:40.291955 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:14:40.291955 ignition[1096]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:14:40.288018 unknown[1096]: wrote ssh authorized keys file for user: core Mar 7 01:14:40.302919 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:14:40.302919 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:14:40.330645 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:14:40.387798 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 
01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 7 01:14:40.954731 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:14:42.381030 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:14:42.381030 ignition[1096]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:14:42.408464 ignition[1096]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:14:42.414396 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:14:42.414396 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:14:42.414396 ignition[1096]: INFO : files: files passed Mar 7 01:14:42.414396 ignition[1096]: INFO : Ignition finished successfully Mar 7 01:14:42.421424 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:14:42.444402 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:14:42.455281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 7 01:14:42.468574 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:14:42.471136 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:14:42.483509 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:14:42.483509 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:14:42.488721 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:14:42.487747 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:14:42.504909 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:14:42.519282 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:14:42.550070 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:14:42.550233 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:14:42.553965 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:14:42.554913 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:14:42.571613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:14:42.583317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:14:42.597564 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:14:42.609372 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:14:42.623480 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:14:42.627383 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:14:42.634173 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:14:42.639953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:14:42.640094 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:14:42.652362 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:14:42.655902 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:14:42.661430 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:14:42.664868 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:14:42.670895 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:14:42.674354 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:14:42.681077 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:14:42.691515 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:14:42.697857 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:14:42.704371 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:14:42.714761 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:14:42.714908 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:14:42.718784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:14:42.724004 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:14:42.727586 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:14:42.730298 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:14:42.734157 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:14:42.737163 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 7 01:14:42.750223 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:14:42.750368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:14:42.756775 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:14:42.756886 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:14:42.774886 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 01:14:42.778191 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:14:42.796541 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:14:42.803035 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:14:42.805557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:14:42.805749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:14:42.809436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:14:42.809546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:14:42.831124 ignition[1148]: INFO : Ignition 2.19.0 Mar 7 01:14:42.831124 ignition[1148]: INFO : Stage: umount Mar 7 01:14:42.831124 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:14:42.831124 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:14:42.854532 ignition[1148]: INFO : umount: umount passed Mar 7 01:14:42.854532 ignition[1148]: INFO : Ignition finished successfully Mar 7 01:14:42.833620 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:14:42.833739 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:14:42.840494 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:14:42.840588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 7 01:14:42.851041 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:14:42.851095 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:14:42.858861 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:14:42.858927 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:14:42.867316 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 01:14:42.867381 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 01:14:42.873043 systemd[1]: Stopped target network.target - Network. Mar 7 01:14:42.879803 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:14:42.879889 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:14:42.882377 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:14:42.882819 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:14:42.898397 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:14:42.906890 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:14:42.909682 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:14:42.912456 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:14:42.912514 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:14:42.926521 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:14:42.926585 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:14:42.932011 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:14:42.932080 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:14:42.938078 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:14:42.938149 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Mar 7 01:14:42.941628 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:14:42.951734 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:14:42.956119 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:14:42.958155 systemd-networkd[897]: eth0: DHCPv6 lease lost Mar 7 01:14:42.960409 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:14:42.960521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:14:42.969050 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:14:42.969118 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:14:42.993494 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:14:43.010628 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:14:43.010701 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:14:43.014454 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:14:43.018307 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:14:43.018414 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:14:43.048516 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:14:43.048654 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:14:43.052343 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:14:43.052389 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:14:43.052492 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:14:43.052529 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:14:43.056901 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 7 01:14:43.057043 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:14:43.058658 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:14:43.058736 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:14:43.076994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:14:43.077030 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:14:43.120282 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: Data path switched from VF: enP48501s1 Mar 7 01:14:43.080157 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:14:43.080202 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:14:43.083423 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:14:43.083470 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:14:43.089735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:14:43.089786 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:14:43.119282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:14:43.129802 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:14:43.129882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:14:43.154835 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:14:43.155032 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:14:43.161988 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:14:43.162043 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:14:43.172797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 7 01:14:43.176259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:14:43.186228 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:14:43.188879 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:14:43.194650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:14:43.197885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:14:43.391293 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:14:43.391419 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:14:43.397304 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:14:43.402555 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:14:43.402621 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:14:43.415307 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:14:43.843063 systemd[1]: Switching root. 
Mar 7 01:14:43.885138 systemd-journald[177]: Journal stopped Mar 7 01:14:31.110401 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:14:31.110428 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:14:31.110443 kernel: BIOS-provided physical RAM map: Mar 7 01:14:31.110450 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 7 01:14:31.110456 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Mar 7 01:14:31.110464 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable Mar 7 01:14:31.110473 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Mar 7 01:14:31.110480 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Mar 7 01:14:31.110488 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Mar 7 01:14:31.110499 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Mar 7 01:14:31.110506 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Mar 7 01:14:31.110512 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Mar 7 01:14:31.110523 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Mar 7 01:14:31.110529 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Mar 7 01:14:31.110543 kernel: printk: bootconsole [earlyser0] enabled Mar 7 01:14:31.110569 kernel: NX (Execute Disable) protection: active
Mar 7 01:14:31.110576 kernel: APIC: Static calls initialized Mar 7 01:14:31.110583 kernel: efi: EFI v2.7 by Microsoft Mar 7 01:14:31.110595 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f421418 Mar 7 01:14:31.110602 kernel: SMBIOS 3.1.0 present. Mar 7 01:14:31.110609 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Mar 7 01:14:31.110620 kernel: Hypervisor detected: Microsoft Hyper-V Mar 7 01:14:31.110628 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Mar 7 01:14:31.110635 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0 Mar 7 01:14:31.110645 kernel: Hyper-V: Nested features: 0x1e0101 Mar 7 01:14:31.110656 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Mar 7 01:14:31.110662 kernel: Hyper-V: Using hypercall for remote TLB flush Mar 7 01:14:31.110674 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 7 01:14:31.110682 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 7 01:14:31.110690 kernel: tsc: Marking TSC unstable due to running on Hyper-V Mar 7 01:14:31.110701 kernel: tsc: Detected 2593.905 MHz processor Mar 7 01:14:31.110709 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:14:31.110716 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:14:31.110727 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Mar 7 01:14:31.110737 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 7 01:14:31.110744 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:14:31.110755 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Mar 7 01:14:31.110762 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Mar 7 01:14:31.110769 kernel: Using GB pages for direct mapping
Mar 7 01:14:31.110781 kernel: Secure boot disabled Mar 7 01:14:31.110792 kernel: ACPI: Early table checksum verification disabled Mar 7 01:14:31.110806 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 7 01:14:31.110813 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110821 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110834 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 7 01:14:31.110842 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 7 01:14:31.110854 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110862 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110876 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110884 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110897 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110910 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 7 01:14:31.110924 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 7 01:14:31.110939 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Mar 7 01:14:31.110958 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 7 01:14:31.110974 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 7 01:14:31.110993 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Mar 7 01:14:31.111017 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 7 01:14:31.111032 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 7 01:14:31.111049 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Mar 7 01:14:31.111064 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 7 01:14:31.111079 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 7 01:14:31.111093 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 7 01:14:31.111108 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 7 01:14:31.111123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 7 01:14:31.111139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 7 01:14:31.111167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 7 01:14:31.111184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 7 01:14:31.111199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 7 01:14:31.111215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 7 01:14:31.111230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 7 01:14:31.111245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 7 01:14:31.111259 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 7 01:14:31.111279 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Mar 7 01:14:31.111306 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 7 01:14:31.111323 kernel: Zone ranges: Mar 7 01:14:31.111336 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:14:31.111352 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 7 01:14:31.111367 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 7 01:14:31.111381 kernel: Movable zone start for each node Mar 7 01:14:31.111397 kernel: Early memory node ranges Mar 7 01:14:31.111414 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:14:31.111430 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Mar 7 01:14:31.111448 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Mar 7 01:14:31.111465 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 7 01:14:31.111481 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 7 01:14:31.111497 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 7 01:14:31.111514 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:14:31.111530 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 7 01:14:31.111558 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Mar 7 01:14:31.111576 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Mar 7 01:14:31.111592 kernel: ACPI: PM-Timer IO Port: 0x408 Mar 7 01:14:31.111609 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 7 01:14:31.111623 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:14:31.111637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:14:31.111653 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:14:31.111673 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 7 01:14:31.111693 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:14:31.111711 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 7 01:14:31.111727 kernel: Booting paravirtualized kernel on Hyper-V Mar 7 01:14:31.111741 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:14:31.111764 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:14:31.111780 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:14:31.111793 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:14:31.111807 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:14:31.111821 kernel: Hyper-V: PV spinlocks enabled
Mar 7 01:14:31.111836 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:14:31.111857 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:14:31.111876 kernel: random: crng init done Mar 7 01:14:31.111899 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 7 01:14:31.111917 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:14:31.111934 kernel: Fallback order for Node 0: 0 Mar 7 01:14:31.111948 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Mar 7 01:14:31.111963 kernel: Policy zone: Normal Mar 7 01:14:31.111978 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:14:31.111995 kernel: software IO TLB: area num 2. Mar 7 01:14:31.112011 kernel: Memory: 8066052K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316916K reserved, 0K cma-reserved) Mar 7 01:14:31.112025 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:14:31.112060 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:14:31.112076 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:14:31.112093 kernel: Dynamic Preempt: voluntary Mar 7 01:14:31.112115 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:14:31.112134 kernel: rcu: RCU event tracing is enabled. Mar 7 01:14:31.112150 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:14:31.112167 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:14:31.112183 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:14:31.112204 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:14:31.112225 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:14:31.112243 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:14:31.112260 kernel: Using NULL legacy PIC Mar 7 01:14:31.112276 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 7 01:14:31.112293 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:14:31.112310 kernel: Console: colour dummy device 80x25 Mar 7 01:14:31.112322 kernel: printk: console [tty1] enabled Mar 7 01:14:31.112334 kernel: printk: console [ttyS0] enabled Mar 7 01:14:31.112350 kernel: printk: bootconsole [earlyser0] disabled Mar 7 01:14:31.112363 kernel: ACPI: Core revision 20230628 Mar 7 01:14:31.112377 kernel: Failed to register legacy timer interrupt Mar 7 01:14:31.112390 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:14:31.112406 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 7 01:14:31.112421 kernel: Hyper-V: Using IPI hypercalls Mar 7 01:14:31.112435 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 7 01:14:31.112451 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 7 01:14:31.112468 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 7 01:14:31.112487 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 7 01:14:31.112501 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 7 01:14:31.112515 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 7 01:14:31.112537 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Mar 7 01:14:31.112569 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 7 01:14:31.112584 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 7 01:14:31.112596 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:14:31.112608 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:14:31.112621 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:14:31.112633 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Mar 7 01:14:31.112647 kernel: RETBleed: Vulnerable Mar 7 01:14:31.112655 kernel: Speculative Store Bypass: Vulnerable Mar 7 01:14:31.112663 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:14:31.112671 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:14:31.112679 kernel: active return thunk: its_return_thunk Mar 7 01:14:31.112686 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 7 01:14:31.112694 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:14:31.112702 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:14:31.112710 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:14:31.112718 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 7 01:14:31.112728 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 7 01:14:31.112736 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 7 01:14:31.112744 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:14:31.112752 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 7 01:14:31.112760 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 7 01:14:31.112775 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 7 01:14:31.112783 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 7 01:14:31.112791 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:14:31.112799 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:14:31.112806 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:14:31.112814 kernel: landlock: Up and running. Mar 7 01:14:31.112822 kernel: SELinux: Initializing. Mar 7 01:14:31.112833 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 7 01:14:31.112841 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 7 01:14:31.112849 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 7 01:14:31.112857 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:14:31.112865 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:14:31.112873 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:14:31.112881 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 7 01:14:31.112889 kernel: signal: max sigframe size: 3632 Mar 7 01:14:31.112897 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:14:31.112908 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:14:31.112916 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 01:14:31.112924 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:14:31.112943 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:14:31.112951 kernel: .... node #0, CPUs: #1 Mar 7 01:14:31.112962 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Mar 7 01:14:31.112973 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 7 01:14:31.112981 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:14:31.112994 kernel: smpboot: Max logical packages: 1 Mar 7 01:14:31.113004 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Mar 7 01:14:31.113012 kernel: devtmpfs: initialized Mar 7 01:14:31.113020 kernel: x86/mm: Memory block size: 128MB Mar 7 01:14:31.113033 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 7 01:14:31.113041 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:14:31.113054 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:14:31.113062 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:14:31.113074 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:14:31.113082 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:14:31.113092 kernel: audit: type=2000 audit(1772846069.030:1): state=initialized audit_enabled=0 res=1 Mar 7 01:14:31.113104 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:14:31.113112 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 01:14:31.113125 kernel: cpuidle: using governor menu Mar 7 01:14:31.113135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:14:31.113143 kernel: dca service started, version 1.12.1 Mar 7 01:14:31.113150 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Mar 7 01:14:31.113158 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Mar 7 01:14:31.113166 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:14:31.113177 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:14:31.113185 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:14:31.113198 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:14:31.113206 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:14:31.113214 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:14:31.113226 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:14:31.113234 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:14:31.113244 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:14:31.113256 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:14:31.113265 kernel: ACPI: Interpreter enabled
Mar 7 01:14:31.113277 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:14:31.113285 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:14:31.113294 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:14:31.113306 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:14:31.113314 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 7 01:14:31.113322 kernel: iommu: Default domain type: Translated
Mar 7 01:14:31.113330 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:14:31.113342 kernel: efivars: Registered efivars operations
Mar 7 01:14:31.113352 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:14:31.113364 kernel: PCI: System does not support PCI
Mar 7 01:14:31.113373 kernel: vgaarb: loaded
Mar 7 01:14:31.113381 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Mar 7 01:14:31.113393 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:14:31.113401 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:14:31.113409 kernel: pnp: PnP ACPI init
Mar 7 01:14:31.113422 kernel: pnp: PnP ACPI: found 3 devices
Mar 7 01:14:31.113430 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:14:31.113442 kernel: NET: Registered PF_INET protocol family
Mar 7 01:14:31.113452 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:14:31.113460 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:14:31.113473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:14:31.113481 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:14:31.113493 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:14:31.113502 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:14:31.113514 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:14:31.113523 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:14:31.113537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:14:31.115565 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:14:31.115592 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:14:31.115612 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:14:31.115622 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Mar 7 01:14:31.115631 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:14:31.115644 kernel: Initialise system trusted keyrings
Mar 7 01:14:31.115652 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:14:31.115668 kernel: Key type asymmetric registered
Mar 7 01:14:31.115677 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:14:31.115690 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:14:31.115698 kernel: io scheduler mq-deadline registered
Mar 7 01:14:31.115709 kernel: io scheduler kyber registered
Mar 7 01:14:31.115719 kernel: io scheduler bfq registered
Mar 7 01:14:31.115727 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:14:31.115740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:14:31.115749 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:14:31.115761 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:14:31.115778 kernel: i8042: PNP: No PS/2 controller found.
Mar 7 01:14:31.115940 kernel: rtc_cmos 00:02: registered as rtc0
Mar 7 01:14:31.116059 kernel: rtc_cmos 00:02: setting system clock to 2026-03-07T01:14:30 UTC (1772846070)
Mar 7 01:14:31.116175 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 7 01:14:31.116194 kernel: intel_pstate: CPU model not supported
Mar 7 01:14:31.116209 kernel: efifb: probing for efifb
Mar 7 01:14:31.116224 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 01:14:31.116243 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 01:14:31.116257 kernel: efifb: scrolling: redraw
Mar 7 01:14:31.116272 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:14:31.116287 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:14:31.116301 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:14:31.116316 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:14:31.116330 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:14:31.116343 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:14:31.116358 kernel: Segment Routing with IPv6
Mar 7 01:14:31.116375 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:14:31.116389 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:14:31.116404 kernel: Key type dns_resolver registered
Mar 7 01:14:31.116418 kernel: IPI shorthand broadcast: enabled
Mar 7 01:14:31.116432 kernel: sched_clock: Marking stable (927003100, 53646400)->(1218718800, -238069300)
Mar 7 01:14:31.116448 kernel: registered taskstats version 1
Mar 7 01:14:31.116462 kernel: Loading compiled-in X.509 certificates
Mar 7 01:14:31.116476 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:14:31.116491 kernel: Key type .fscrypt registered
Mar 7 01:14:31.116510 kernel: Key type fscrypt-provisioning registered
Mar 7 01:14:31.116525 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:14:31.116540 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:14:31.118583 kernel: ima: No architecture policies found
Mar 7 01:14:31.118602 kernel: clk: Disabling unused clocks
Mar 7 01:14:31.118617 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:14:31.118631 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:14:31.118644 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:14:31.118658 kernel: Run /init as init process
Mar 7 01:14:31.118677 kernel: with arguments:
Mar 7 01:14:31.118691 kernel: /init
Mar 7 01:14:31.118703 kernel: with environment:
Mar 7 01:14:31.118715 kernel: HOME=/
Mar 7 01:14:31.118724 kernel: TERM=linux
Mar 7 01:14:31.118736 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:14:31.118752 systemd[1]: Detected virtualization microsoft.
Mar 7 01:14:31.118765 systemd[1]: Detected architecture x86-64.
Mar 7 01:14:31.118777 systemd[1]: Running in initrd.
Mar 7 01:14:31.118785 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:14:31.118797 systemd[1]: Hostname set to .
Mar 7 01:14:31.118807 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:14:31.118816 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:14:31.118829 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:14:31.118838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:14:31.118847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:14:31.118862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:14:31.118872 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:14:31.118881 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:14:31.118895 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:14:31.118905 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:14:31.118917 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:14:31.118927 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:14:31.118940 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:14:31.118951 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:14:31.118959 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:14:31.118972 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:14:31.118981 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:14:31.118990 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:14:31.119003 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:14:31.119012 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:14:31.119025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:14:31.119036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:14:31.119047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:14:31.119059 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:14:31.119067 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:14:31.119080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:14:31.119089 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:14:31.119101 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:14:31.119110 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:14:31.119143 systemd-journald[177]: Collecting audit messages is disabled.
Mar 7 01:14:31.119169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:14:31.119181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:31.119191 systemd-journald[177]: Journal started
Mar 7 01:14:31.119218 systemd-journald[177]: Runtime Journal (/run/log/journal/fde60b17c8cf4689bb33d76792c89a6f) is 8.0M, max 158.7M, 150.7M free.
Mar 7 01:14:31.136376 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:14:31.136882 systemd-modules-load[178]: Inserted module 'overlay'
Mar 7 01:14:31.148061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:14:31.155372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:14:31.159411 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:14:31.183665 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:14:31.183702 kernel: Bridge firewalling registered
Mar 7 01:14:31.185633 systemd-modules-load[178]: Inserted module 'br_netfilter'
Mar 7 01:14:31.189060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:14:31.194962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:14:31.205700 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:14:31.213455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:31.220867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:14:31.221146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:14:31.237849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:14:31.247726 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:14:31.253403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:14:31.272848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:14:31.276433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:14:31.286862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:31.297726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:14:31.308748 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:14:31.318429 dracut-cmdline[212]: dracut-dracut-053
Mar 7 01:14:31.322632 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:14:31.377799 systemd-resolved[214]: Positive Trust Anchors:
Mar 7 01:14:31.377815 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:14:31.377872 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:14:31.407403 systemd-resolved[214]: Defaulting to hostname 'linux'.
Mar 7 01:14:31.411580 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:14:31.419486 kernel: SCSI subsystem initialized
Mar 7 01:14:31.419700 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:14:31.429569 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:14:31.441576 kernel: iscsi: registered transport (tcp)
Mar 7 01:14:31.462892 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:14:31.462976 kernel: QLogic iSCSI HBA Driver
Mar 7 01:14:31.499005 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:14:31.508802 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:14:31.537061 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:14:31.537148 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:14:31.540670 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:14:31.582586 kernel: raid6: avx512x4 gen() 18200 MB/s
Mar 7 01:14:31.601570 kernel: raid6: avx512x2 gen() 18500 MB/s
Mar 7 01:14:31.620568 kernel: raid6: avx512x1 gen() 18657 MB/s
Mar 7 01:14:31.640569 kernel: raid6: avx2x4 gen() 18572 MB/s
Mar 7 01:14:31.659566 kernel: raid6: avx2x2 gen() 18590 MB/s
Mar 7 01:14:31.679696 kernel: raid6: avx2x1 gen() 14131 MB/s
Mar 7 01:14:31.679725 kernel: raid6: using algorithm avx512x1 gen() 18657 MB/s
Mar 7 01:14:31.701931 kernel: raid6: .... xor() 26936 MB/s, rmw enabled
Mar 7 01:14:31.701965 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:14:31.724576 kernel: xor: automatically using best checksumming function avx
Mar 7 01:14:31.872575 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:14:31.882661 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:14:31.893744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:14:31.908848 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 7 01:14:31.913643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:14:31.930749 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:14:31.944228 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Mar 7 01:14:31.972469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:14:31.982717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:14:32.029144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:14:32.041797 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:14:32.060168 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:14:32.068927 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:14:32.072875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:14:32.072962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:14:32.084833 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:14:32.110330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:14:32.130564 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:14:32.158612 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:14:32.158670 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:14:32.159573 kernel: hv_vmbus: Vmbus version:5.2
Mar 7 01:14:32.182992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:14:32.186759 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:32.212683 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 7 01:14:32.212713 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 01:14:32.190724 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:14:32.198798 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:14:32.198950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:32.205411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:32.227131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:32.233315 kernel: hv_vmbus: registering driver hid_hyperv
Mar 7 01:14:32.246729 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 7 01:14:32.246816 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 7 01:14:32.254774 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 7 01:14:32.254830 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 7 01:14:32.261575 kernel: hv_vmbus: registering driver hv_netvsc
Mar 7 01:14:32.276198 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 7 01:14:32.270582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:14:32.270704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:32.286110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:32.295023 kernel: PTP clock support registered
Mar 7 01:14:32.316333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:32.328676 kernel: hv_vmbus: registering driver hv_storvsc
Mar 7 01:14:32.330723 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:14:32.344431 kernel: hv_utils: Registering HyperV Utility Driver
Mar 7 01:14:32.344490 kernel: hv_vmbus: registering driver hv_utils
Mar 7 01:14:32.349213 kernel: scsi host1: storvsc_host_t
Mar 7 01:14:32.349284 kernel: hv_utils: Heartbeat IC version 3.0
Mar 7 01:14:32.349304 kernel: scsi host0: storvsc_host_t
Mar 7 01:14:32.349334 kernel: hv_utils: Shutdown IC version 3.2
Mar 7 01:14:32.356220 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 7 01:14:32.356281 kernel: hv_utils: TimeSync IC version 4.0
Mar 7 01:14:32.396390 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 7 01:14:32.398816 systemd-resolved[214]: Clock change detected. Flushing caches.
Mar 7 01:14:32.406278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:32.425268 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 7 01:14:32.425561 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:14:32.427130 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 7 01:14:32.441266 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 7 01:14:32.441495 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 7 01:14:32.441626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:14:32.447634 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:14:32.447873 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 7 01:14:32.448405 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 7 01:14:32.462757 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:14:32.462815 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:14:32.471510 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: VF slot 1 added
Mar 7 01:14:32.486207 kernel: hv_vmbus: registering driver hv_pci
Mar 7 01:14:32.494717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:14:32.494962 kernel: hv_pci e1f886c8-bd75-4b64-9a61-e55e5fa7135f: PCI VMBus probing: Using version 0x10004
Mar 7 01:14:32.503018 kernel: hv_pci e1f886c8-bd75-4b64-9a61-e55e5fa7135f: PCI host bridge to bus bd75:00
Mar 7 01:14:32.503331 kernel: pci_bus bd75:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Mar 7 01:14:32.504122 kernel: pci_bus bd75:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 7 01:14:32.512353 kernel: pci bd75:00:02.0: [15b3:1016] type 00 class 0x020000
Mar 7 01:14:32.519421 kernel: pci bd75:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 7 01:14:32.519467 kernel: pci bd75:00:02.0: enabling Extended Tags
Mar 7 01:14:32.534166 kernel: pci bd75:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bd75:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Mar 7 01:14:32.541783 kernel: pci_bus bd75:00: busn_res: [bus 00-ff] end is updated to 00
Mar 7 01:14:32.542130 kernel: pci bd75:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 7 01:14:32.711149 kernel: mlx5_core bd75:00:02.0: enabling device (0000 -> 0002)
Mar 7 01:14:32.716126 kernel: mlx5_core bd75:00:02.0: firmware version: 14.30.5026
Mar 7 01:14:32.930578 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: VF registering: eth1
Mar 7 01:14:32.930986 kernel: mlx5_core bd75:00:02.0 eth1: joined to eth0
Mar 7 01:14:32.935020 kernel: mlx5_core bd75:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Mar 7 01:14:32.946127 kernel: mlx5_core bd75:00:02.0 enP48501s1: renamed from eth1
Mar 7 01:14:33.114573 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457)
Mar 7 01:14:33.124957 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 7 01:14:33.138948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:14:33.154167 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 7 01:14:33.223127 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (476)
Mar 7 01:14:33.236885 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 7 01:14:33.240725 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 7 01:14:33.260358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:14:33.279133 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:14:33.288196 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:14:33.296146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:14:34.300516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:14:34.300578 disk-uuid[610]: The operation has completed successfully.
Mar 7 01:14:34.388591 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:14:34.388708 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:14:34.415253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:14:34.422366 sh[723]: Success
Mar 7 01:14:34.457130 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:14:34.892685 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:14:34.908227 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:14:34.916861 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:14:34.954121 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:14:34.954172 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:34.960898 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:14:34.964155 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:14:34.967020 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:14:35.433231 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:14:35.437077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:14:35.445331 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:14:35.451233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:14:35.480129 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:35.480184 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:35.484767 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:14:35.536984 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:14:35.551126 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:14:35.552667 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:14:35.567384 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:14:35.573569 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:35.581261 systemd-networkd[897]: lo: Link UP
Mar 7 01:14:35.581575 systemd-networkd[897]: lo: Gained carrier
Mar 7 01:14:35.584083 systemd-networkd[897]: Enumeration completed
Mar 7 01:14:35.584990 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:35.584996 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:14:35.585628 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:14:35.588616 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:14:35.589316 systemd[1]: Reached target network.target - Network.
Mar 7 01:14:35.615398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:14:35.656132 kernel: mlx5_core bd75:00:02.0 enP48501s1: Link up
Mar 7 01:14:35.699129 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: Data path switched to VF: enP48501s1
Mar 7 01:14:35.699561 systemd-networkd[897]: enP48501s1: Link UP
Mar 7 01:14:35.699713 systemd-networkd[897]: eth0: Link UP
Mar 7 01:14:35.699920 systemd-networkd[897]: eth0: Gained carrier
Mar 7 01:14:35.699934 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:35.713929 systemd-networkd[897]: enP48501s1: Gained carrier
Mar 7 01:14:35.754158 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.30/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 7 01:14:37.016600 ignition[908]: Ignition 2.19.0
Mar 7 01:14:37.016613 ignition[908]: Stage: fetch-offline
Mar 7 01:14:37.020547 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:14:37.016656 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:37.016666 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:14:37.016784 ignition[908]: parsed url from cmdline: ""
Mar 7 01:14:37.016788 ignition[908]: no config URL provided
Mar 7 01:14:37.016795 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:14:37.037442 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:14:37.016806 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:14:37.016812 ignition[908]: failed to fetch config: resource requires networking
Mar 7 01:14:37.017174 ignition[908]: Ignition finished successfully
Mar 7 01:14:37.055810 ignition[915]: Ignition 2.19.0
Mar 7 01:14:37.055818 ignition[915]: Stage: fetch
Mar 7 01:14:37.056033 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:37.056049 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:14:37.056170 ignition[915]: parsed url from cmdline: ""
Mar 7 01:14:37.056176 ignition[915]: no config URL provided
Mar 7 01:14:37.056182 ignition[915]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:14:37.056189 ignition[915]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:14:37.056218 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 7 01:14:37.151457 ignition[915]: GET result: OK
Mar 7 01:14:37.151565 ignition[915]: config has been read from IMDS userdata
Mar 7 01:14:37.151594 ignition[915]: parsing config with SHA512: 05b7404b80f11babb9aaf0caef5bf7d854e410f0b618969aca32cc9f78b33f24810f56b29b73ec9adf49d0675229512f452f809fc088a42762cdc9438e472305
Mar 7 01:14:37.155690 unknown[915]: fetched base config from "system"
Mar 7 01:14:37.156150 ignition[915]: fetch: fetch complete
Mar 7 01:14:37.155697 unknown[915]: fetched base config from "system"
Mar 7 01:14:37.156157 ignition[915]: fetch: fetch passed
Mar 7 01:14:37.155701 unknown[915]: fetched user config from "azure"
Mar 7 01:14:37.156209 ignition[915]: Ignition finished successfully
Mar 7 01:14:37.169719 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:14:37.182256 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:14:37.203881 ignition[922]: Ignition 2.19.0
Mar 7 01:14:37.203894 ignition[922]: Stage: kargs
Mar 7 01:14:37.204134 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:37.208368 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:14:37.204148 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:14:37.205054 ignition[922]: kargs: kargs passed
Mar 7 01:14:37.205100 ignition[922]: Ignition finished successfully
Mar 7 01:14:37.224372 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:14:37.244433 ignition[928]: Ignition 2.19.0
Mar 7 01:14:37.244447 ignition[928]: Stage: disks
Mar 7 01:14:37.246967 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:14:37.244664 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:37.250560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:14:37.244678 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:14:37.255137 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:14:37.245949 ignition[928]: disks: disks passed
Mar 7 01:14:37.258834 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:14:37.245994 ignition[928]: Ignition finished successfully
Mar 7 01:14:37.264600 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:14:37.272972 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:14:37.291387 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:14:37.401471 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 7 01:14:37.406369 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:14:37.417292 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:14:37.514124 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:14:37.514292 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:14:37.515095 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:14:37.574246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:14:37.594129 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Mar 7 01:14:37.601943 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:37.602034 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:37.604670 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:14:37.612123 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:14:37.630219 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:14:37.634971 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 7 01:14:37.644027 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:14:37.644073 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:14:37.647010 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:14:37.662499 systemd-networkd[897]: eth0: Gained IPv6LL
Mar 7 01:14:37.664761 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:14:37.677867 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:14:38.661939 coreos-metadata[965]: Mar 07 01:14:38.661 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 01:14:38.668036 coreos-metadata[965]: Mar 07 01:14:38.667 INFO Fetch successful
Mar 7 01:14:38.668036 coreos-metadata[965]: Mar 07 01:14:38.668 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 7 01:14:38.685862 coreos-metadata[965]: Mar 07 01:14:38.685 INFO Fetch successful
Mar 7 01:14:38.737800 coreos-metadata[965]: Mar 07 01:14:38.737 INFO wrote hostname ci-4081.3.6-n-1070eafa86 to /sysroot/etc/hostname
Mar 7 01:14:38.742759 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:14:38.747342 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:14:38.776967 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:14:38.808067 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:14:38.813831 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:14:39.895053 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:14:39.906204 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:14:39.915289 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:14:39.922190 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:39.925536 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:14:39.956656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:14:39.963040 ignition[1067]: INFO : Ignition 2.19.0
Mar 7 01:14:39.963040 ignition[1067]: INFO : Stage: mount
Mar 7 01:14:39.963040 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:39.963040 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:14:39.978479 ignition[1067]: INFO : mount: mount passed
Mar 7 01:14:39.978479 ignition[1067]: INFO : Ignition finished successfully
Mar 7 01:14:39.966670 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:14:39.988552 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:14:39.995862 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:14:40.028131 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1079)
Mar 7 01:14:40.033119 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:40.033167 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:40.038277 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:14:40.046311 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:14:40.047774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:14:40.078699 ignition[1096]: INFO : Ignition 2.19.0
Mar 7 01:14:40.078699 ignition[1096]: INFO : Stage: files
Mar 7 01:14:40.087095 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:40.087095 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:14:40.087095 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:14:40.113178 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:14:40.113178 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:14:40.287538 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:14:40.291955 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:14:40.291955 ignition[1096]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:14:40.288018 unknown[1096]: wrote ssh authorized keys file for user: core
Mar 7 01:14:40.302919 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:14:40.302919 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:14:40.330645 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:14:40.387798 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:14:40.393984 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:14:40.954731 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:14:42.381030 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:14:42.381030 ignition[1096]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:14:42.408464 ignition[1096]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:14:42.414396 ignition[1096]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:14:42.414396 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:14:42.414396 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:14:42.414396 ignition[1096]: INFO : files: files passed
Mar 7 01:14:42.414396 ignition[1096]: INFO : Ignition finished successfully
Mar 7 01:14:42.421424 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:14:42.444402 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:14:42.455281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:14:42.468574 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:14:42.471136 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:14:42.483509 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:14:42.483509 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:14:42.488721 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:14:42.487747 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:14:42.504909 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:14:42.519282 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:14:42.550070 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:14:42.550233 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:14:42.553965 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:14:42.554913 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:14:42.571613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:14:42.583317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:14:42.597564 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:14:42.609372 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:14:42.623480 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:14:42.627383 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:14:42.634173 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:14:42.639953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:14:42.640094 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:14:42.652362 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:14:42.655902 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:14:42.661430 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:14:42.664868 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:14:42.670895 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:14:42.674354 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:14:42.681077 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:14:42.691515 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:14:42.697857 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:14:42.704371 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:14:42.714761 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:14:42.714908 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:14:42.718784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:14:42.724004 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:14:42.727586 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:14:42.730298 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:14:42.734157 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:14:42.737163 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:14:42.750223 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:14:42.750368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:14:42.756775 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:14:42.756886 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:14:42.774886 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 01:14:42.778191 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:14:42.796541 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:14:42.803035 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:14:42.805557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:14:42.805749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:14:42.809436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:14:42.809546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:14:42.831124 ignition[1148]: INFO : Ignition 2.19.0
Mar 7 01:14:42.831124 ignition[1148]: INFO : Stage: umount
Mar 7 01:14:42.831124 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:42.831124 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:14:42.854532 ignition[1148]: INFO : umount: umount passed
Mar 7 01:14:42.854532 ignition[1148]: INFO : Ignition finished successfully
Mar 7 01:14:42.833620 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:14:42.833739 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:14:42.840494 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:14:42.840588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:14:42.851041 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:14:42.851095 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:14:42.858861 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:14:42.858927 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:14:42.867316 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:14:42.867381 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:14:42.873043 systemd[1]: Stopped target network.target - Network.
Mar 7 01:14:42.879803 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:14:42.879889 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:14:42.882377 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:14:42.882819 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:14:42.898397 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:14:42.906890 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:14:42.909682 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:14:42.912456 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:14:42.912514 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:14:42.926521 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:14:42.926585 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:14:42.932011 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:14:42.932080 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:14:42.938078 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:14:42.938149 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:14:42.941628 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:14:42.951734 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:14:42.956119 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:14:42.958155 systemd-networkd[897]: eth0: DHCPv6 lease lost
Mar 7 01:14:42.960409 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:14:42.960521 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:14:42.969050 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:14:42.969118 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:14:42.993494 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:14:43.010628 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:14:43.010701 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:14:43.014454 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:14:43.018307 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:14:43.018414 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:14:43.048516 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:14:43.048654 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:14:43.052343 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:14:43.052389 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:14:43.052492 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:14:43.052529 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:14:43.056901 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:14:43.057043 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:14:43.058658 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:14:43.058736 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:14:43.076994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:14:43.077030 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:14:43.120282 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: Data path switched from VF: enP48501s1
Mar 7 01:14:43.080157 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:14:43.080202 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:14:43.083423 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:14:43.083470 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:14:43.089735 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:14:43.089786 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:43.119282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:14:43.129802 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:14:43.129882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:14:43.154835 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:14:43.155032 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:14:43.161988 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:14:43.162043 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:14:43.172797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:14:43.176259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:43.186228 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:14:43.188879 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:14:43.194650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:14:43.197885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:14:43.391293 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:14:43.391419 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:14:43.397304 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:14:43.402555 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:14:43.402621 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:14:43.415307 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:14:43.843063 systemd[1]: Switching root.
Mar 7 01:14:43.885138 systemd-journald[177]: Journal stopped
Mar 7 01:14:46.149354 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:14:46.149394 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:14:46.149415 kernel: SELinux: policy capability open_perms=1
Mar 7 01:14:46.149428 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:14:46.149440 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:14:46.149453 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:14:46.149469 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:14:46.149483 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:14:46.149499 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:14:46.149513 kernel: audit: type=1403 audit(1772846084.262:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:14:46.149528 systemd[1]: Successfully loaded SELinux policy in 67.299ms.
Mar 7 01:14:46.149543 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.809ms.
Mar 7 01:14:46.149560 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:14:46.149575 systemd[1]: Detected virtualization microsoft.
Mar 7 01:14:46.149596 systemd[1]: Detected architecture x86-64.
Mar 7 01:14:46.149613 systemd[1]: Detected first boot.
Mar 7 01:14:46.149629 systemd[1]: Hostname set to .
Mar 7 01:14:46.149644 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:14:46.149660 zram_generator::config[1191]: No configuration found.
Mar 7 01:14:46.149681 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:14:46.149699 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:14:46.149715 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:14:46.149734 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:14:46.149751 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:14:46.149769 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:14:46.149788 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:14:46.149809 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:14:46.149829 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:14:46.149848 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:14:46.149865 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:14:46.149883 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:14:46.149901 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:14:46.149920 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:14:46.149937 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:14:46.149958 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:14:46.149977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:14:46.149995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:14:46.150013 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:14:46.150028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:14:46.150043 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:14:46.150062 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:14:46.150077 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:14:46.150091 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:14:46.150141 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:14:46.150161 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:14:46.150179 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:14:46.150198 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:14:46.150216 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:14:46.150236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:14:46.150254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:14:46.150278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:14:46.150298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:14:46.150317 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:14:46.150336 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:14:46.150355 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:14:46.150377 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:14:46.150394 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:46.150412 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:14:46.150428 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:14:46.150445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:14:46.150463 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:14:46.150481 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:14:46.150498 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:14:46.150519 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:14:46.150537 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:14:46.150553 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:14:46.150569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:14:46.150586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:14:46.150602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:14:46.150619 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:14:46.150636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:14:46.150656 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:14:46.150673 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:14:46.150691 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:14:46.150708 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:14:46.150725 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:14:46.150742 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:14:46.150759 kernel: loop: module loaded
Mar 7 01:14:46.150775 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:14:46.150792 kernel: fuse: init (API version 7.39)
Mar 7 01:14:46.150811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:14:46.150829 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:14:46.150874 systemd-journald[1290]: Collecting audit messages is disabled.
Mar 7 01:14:46.150909 systemd-journald[1290]: Journal started
Mar 7 01:14:46.150948 systemd-journald[1290]: Runtime Journal (/run/log/journal/62f5965809cf47519a2bb3d1b0dc1afb) is 8.0M, max 158.7M, 150.7M free.
Mar 7 01:14:45.381735 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:14:45.564577 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:14:45.564972 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:14:46.166552 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:14:46.173749 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:14:46.173812 systemd[1]: Stopped verity-setup.service.
Mar 7 01:14:46.185132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:46.191125 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:14:46.194872 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:14:46.199346 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:14:46.203268 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:14:46.206709 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:14:46.210371 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:14:46.214038 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:14:46.219594 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:14:46.223263 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:14:46.226900 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:14:46.227183 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:14:46.231122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:14:46.232345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:14:46.235919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:14:46.236146 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:14:46.248723 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:14:46.248939 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:14:46.252807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:14:46.252999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:14:46.258164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:14:46.262157 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:14:46.266406 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:14:46.271006 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:14:46.288525 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:14:46.308299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:14:46.316172 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:14:46.321129 kernel: ACPI: bus type drm_connector registered
Mar 7 01:14:46.321436 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:14:46.321484 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:14:46.326454 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:14:46.331066 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:14:46.338741 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:14:46.341487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:14:46.353361 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:14:46.357733 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:14:46.360970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:14:46.365225 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:14:46.368878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:14:46.371448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:14:46.380237 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:14:46.385606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:14:46.391362 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:14:46.398298 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:14:46.400375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:14:46.404091 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:14:46.407705 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:14:46.411581 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:14:46.435929 systemd-journald[1290]: Time spent on flushing to /var/log/journal/62f5965809cf47519a2bb3d1b0dc1afb is 25.896ms for 959 entries.
Mar 7 01:14:46.435929 systemd-journald[1290]: System Journal (/var/log/journal/62f5965809cf47519a2bb3d1b0dc1afb) is 8.0M, max 2.6G, 2.6G free.
Mar 7 01:14:46.481478 systemd-journald[1290]: Received client request to flush runtime journal.
Mar 7 01:14:46.439916 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:14:46.446855 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:14:46.458489 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:14:46.466224 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:14:46.483430 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:14:46.493127 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 01:14:46.501193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:14:46.524249 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:14:46.524908 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:14:46.574898 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Mar 7 01:14:46.574927 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Mar 7 01:14:46.580587 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:14:46.594275 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:14:46.653088 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:14:46.665303 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:14:46.691241 systemd-tmpfiles[1347]: ACLs are not supported, ignoring.
Mar 7 01:14:46.691659 systemd-tmpfiles[1347]: ACLs are not supported, ignoring.
Mar 7 01:14:46.698681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:14:47.272140 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:14:47.329137 kernel: loop1: detected capacity change from 0 to 219192
Mar 7 01:14:47.377534 kernel: loop2: detected capacity change from 0 to 140768
Mar 7 01:14:47.501130 kernel: loop3: detected capacity change from 0 to 31056
Mar 7 01:14:47.644956 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:14:47.654310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:14:47.680412 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
Mar 7 01:14:47.985775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:14:47.997167 kernel: loop4: detected capacity change from 0 to 142488
Mar 7 01:14:48.003308 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:14:48.037154 kernel: loop5: detected capacity change from 0 to 219192
Mar 7 01:14:48.074206 kernel: loop6: detected capacity change from 0 to 140768
Mar 7 01:14:48.105139 kernel: loop7: detected capacity change from 0 to 31056
Mar 7 01:14:48.120268 (sd-merge)[1364]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 7 01:14:48.123138 (sd-merge)[1364]: Merged extensions into '/usr'.
Mar 7 01:14:48.132239 systemd[1]: Reloading requested from client PID 1327 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:14:48.132258 systemd[1]: Reloading...
Mar 7 01:14:48.185138 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:14:48.204160 kernel: hv_vmbus: registering driver hv_balloon
Mar 7 01:14:48.210134 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 7 01:14:48.247203 kernel: hv_vmbus: registering driver hyperv_fb
Mar 7 01:14:48.265144 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:14:48.293369 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 7 01:14:48.303299 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 7 01:14:48.313135 zram_generator::config[1424]: No configuration found.
Mar 7 01:14:48.326517 kernel: Console: switching to colour dummy device 80x25
Mar 7 01:14:48.336609 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:14:48.588467 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1375)
Mar 7 01:14:48.707124 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Mar 7 01:14:48.733036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:14:48.831193 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:14:48.831749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:14:48.836348 systemd[1]: Reloading finished in 703 ms.
Mar 7 01:14:48.863370 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:14:48.903313 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:14:48.909347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:14:48.915139 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:14:48.922311 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:14:48.930600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:48.935180 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:14:48.956499 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:14:48.960733 systemd[1]: Reloading requested from client PID 1517 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:14:48.960756 systemd[1]: Reloading...
Mar 7 01:14:49.015688 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:14:49.017733 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:14:49.025517 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:14:49.025944 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Mar 7 01:14:49.026029 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Mar 7 01:14:49.059035 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:14:49.059056 systemd-tmpfiles[1519]: Skipping /boot
Mar 7 01:14:49.069917 zram_generator::config[1557]: No configuration found.
Mar 7 01:14:49.085944 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:14:49.085958 systemd-tmpfiles[1519]: Skipping /boot
Mar 7 01:14:49.192029 systemd-networkd[1367]: lo: Link UP
Mar 7 01:14:49.192041 systemd-networkd[1367]: lo: Gained carrier
Mar 7 01:14:49.196292 systemd-networkd[1367]: Enumeration completed
Mar 7 01:14:49.197082 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:49.197210 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:14:49.251125 kernel: mlx5_core bd75:00:02.0 enP48501s1: Link up
Mar 7 01:14:49.266651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:14:49.272183 kernel: hv_netvsc 6045bde0-1ac9-6045-bde0-1ac96045bde0 eth0: Data path switched to VF: enP48501s1
Mar 7 01:14:49.272748 systemd-networkd[1367]: enP48501s1: Link UP
Mar 7 01:14:49.272913 systemd-networkd[1367]: eth0: Link UP
Mar 7 01:14:49.272918 systemd-networkd[1367]: eth0: Gained carrier
Mar 7 01:14:49.272942 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:49.276678 systemd-networkd[1367]: enP48501s1: Gained carrier
Mar 7 01:14:49.295139 lvm[1523]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:14:49.301254 systemd-networkd[1367]: eth0: DHCPv4 address 10.200.8.30/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 7 01:14:49.380144 systemd[1]: Reloading finished in 418 ms.
Mar 7 01:14:49.396397 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:14:49.400913 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:14:49.407594 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:14:49.412646 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:14:49.417501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:49.421681 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:14:49.432090 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:14:49.444371 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:14:49.452282 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:14:49.459421 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:14:49.468046 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:14:49.473449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:14:49.483306 lvm[1630]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:14:49.489974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:14:49.512532 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:14:49.524748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:49.525017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:14:49.533503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:14:49.546462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:14:49.553697 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:14:49.558414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:14:49.558608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:49.559867 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:14:49.565308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:14:49.565503 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:14:49.576393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:14:49.576525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:14:49.582382 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:14:49.587547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:14:49.587683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:14:49.603730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:14:49.604006 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:14:49.606887 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:14:49.614307 augenrules[1656]: No rules
Mar 7 01:14:49.615029 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:14:49.623868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:49.624371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:14:49.632245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:14:49.641206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:14:49.653408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:14:49.657011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:14:49.657341 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:49.658975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:14:49.659208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:14:49.663372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:14:49.663557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:14:49.677807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:14:49.677941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:14:49.682726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:49.683340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:14:49.692332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:14:49.697067 systemd-resolved[1636]: Positive Trust Anchors:
Mar 7 01:14:49.697098 systemd-resolved[1636]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:14:49.697162 systemd-resolved[1636]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:14:49.697941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:14:49.708396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:14:49.711979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:14:49.712270 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:14:49.715388 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:49.715513 systemd-resolved[1636]: Using system hostname 'ci-4081.3.6-n-1070eafa86'.
Mar 7 01:14:49.716847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:14:49.717060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:14:49.728275 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:14:49.732438 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:14:49.732618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:14:49.736333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:14:49.736517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:14:49.742586 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:14:49.747921 systemd[1]: Reached target network.target - Network.
Mar 7 01:14:49.750750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:14:49.754235 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:14:49.754276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:14:50.240562 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:14:50.245144 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:14:50.766273 systemd-networkd[1367]: eth0: Gained IPv6LL
Mar 7 01:14:50.769122 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:14:50.776499 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:14:55.657613 ldconfig[1322]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:14:55.672001 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:14:55.681347 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:14:55.693641 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:14:55.697593 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:14:55.700875 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:14:55.704495 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:14:55.708445 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:14:55.711716 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:14:55.715790 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:14:55.719674 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:14:55.719733 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:14:55.722429 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:14:55.725718 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:14:55.730438 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:14:55.745142 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:14:55.749007 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:14:55.752840 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:14:55.755735 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:14:55.758499 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:14:55.758524 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:14:55.768201 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 7 01:14:55.773233 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:14:55.787983 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:14:55.792911 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:14:55.799293 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:14:55.814298 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:14:55.817649 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:14:55.817712 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 7 01:14:55.820983 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 7 01:14:55.824220 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 7 01:14:55.828316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:14:55.835271 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:14:55.844251 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 7 01:14:55.845312 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:14:55.857217 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:14:55.863563 jq[1688]: false
Mar 7 01:14:55.863627 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:14:55.869281 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:14:55.878925 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:14:55.883286 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:14:55.883893 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:14:55.886315 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:14:55.892558 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:14:55.913039 KVP[1692]: KVP starting; pid is:1692
Mar 7 01:14:55.900500 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:14:55.900725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:14:55.909550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:14:55.909790 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:14:55.932537 jq[1704]: true
Mar 7 01:14:55.946059 kernel: hv_utils: KVP IC version 4.0
Mar 7 01:14:55.943441 KVP[1692]: KVP LIC Version: 3.1
Mar 7 01:14:55.944032 chronyd[1722]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 7 01:14:55.958091 (ntainerd)[1718]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found loop4
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found loop5
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found loop6
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found loop7
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda1
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda2
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda3
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found usr
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda4
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda6
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda7
Mar 7 01:14:55.974086 extend-filesystems[1691]: Found sda9
Mar 7 01:14:55.974086 extend-filesystems[1691]: Checking size of /dev/sda9
Mar 7 01:14:56.057130 update_engine[1703]: I20260307 01:14:56.018049 1703 main.cc:92] Flatcar Update Engine starting
Mar 7 01:14:55.975366 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:14:55.996779 chronyd[1722]: Timezone right/UTC failed leap second check, ignoring
Mar 7 01:14:56.061411 jq[1719]: true
Mar 7 01:14:56.061508 extend-filesystems[1691]: Old size kept for /dev/sda9
Mar 7 01:14:56.061508 extend-filesystems[1691]: Found sr0
Mar 7 01:14:55.976194 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:14:55.997021 chronyd[1722]: Loaded seccomp filter (level 2)
Mar 7 01:14:56.001088 systemd[1]: Started chronyd.service - NTP client/server.
Mar 7 01:14:56.046921 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:14:56.047173 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:14:56.068019 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:14:56.073045 dbus-daemon[1687]: [system] SELinux support is enabled
Mar 7 01:14:56.075796 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:14:56.077410 update_engine[1703]: I20260307 01:14:56.077361 1703 update_check_scheduler.cc:74] Next update check in 3m40s
Mar 7 01:14:56.090364 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:14:56.090453 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:14:56.092950 tar[1707]: linux-amd64/LICENSE
Mar 7 01:14:56.092950 tar[1707]: linux-amd64/helm
Mar 7 01:14:56.100218 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:14:56.100257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:14:56.106835 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:14:56.122300 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:14:56.164531 systemd-logind[1702]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:14:56.168495 systemd-logind[1702]: New seat seat0.
Mar 7 01:14:56.170165 coreos-metadata[1686]: Mar 07 01:14:56.170 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 01:14:56.172444 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:14:56.177739 coreos-metadata[1686]: Mar 07 01:14:56.177 INFO Fetch successful
Mar 7 01:14:56.177739 coreos-metadata[1686]: Mar 07 01:14:56.177 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 7 01:14:56.186060 coreos-metadata[1686]: Mar 07 01:14:56.183 INFO Fetch successful
Mar 7 01:14:56.186060 coreos-metadata[1686]: Mar 07 01:14:56.183 INFO Fetching http://168.63.129.16/machine/378636e4-46cb-4428-bcd5-6cd89b7f57d0/6ad6f9fa%2D2ed7%2D40c8%2Dbbfe%2D92864d657ecf.%5Fci%2D4081.3.6%2Dn%2D1070eafa86?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 7 01:14:56.188122 coreos-metadata[1686]: Mar 07 01:14:56.186 INFO Fetch successful
Mar 7 01:14:56.188122 coreos-metadata[1686]: Mar 07 01:14:56.186 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 7 01:14:56.195524 coreos-metadata[1686]: Mar 07 01:14:56.195 INFO Fetch successful
Mar 7 01:14:56.268743 bash[1760]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:14:56.273643 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:14:56.284414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:14:56.292484 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:14:56.293472 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:14:56.321283 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1768)
Mar 7 01:14:56.479261 sshd_keygen[1741]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:14:56.546144 locksmithd[1759]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:14:56.575532 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:14:56.588229 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:14:56.597625 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 7 01:14:56.624530 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:14:56.624746 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:14:56.645532 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:14:56.696519 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 7 01:14:56.702808 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:14:56.716476 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:14:56.725437 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:14:56.732066 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:14:57.074471 tar[1707]: linux-amd64/README.md
Mar 7 01:14:57.087904 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:14:57.390415 containerd[1718]: time="2026-03-07T01:14:57.389786100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:14:57.420722 containerd[1718]: time="2026-03-07T01:14:57.420521200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:14:57.422668 containerd[1718]: time="2026-03-07T01:14:57.422628300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.422786000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.422813200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.422965900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.422985800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423054900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423096000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423345500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423365800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423382900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423396500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423490800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424038 containerd[1718]: time="2026-03-07T01:14:57.423657700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424489 containerd[1718]: time="2026-03-07T01:14:57.423753500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:14:57.424489 containerd[1718]: time="2026-03-07T01:14:57.423764900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 7 01:14:57.424489 containerd[1718]: time="2026-03-07T01:14:57.423866600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:14:57.424489 containerd[1718]: time="2026-03-07T01:14:57.423919400Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:14:57.442389 containerd[1718]: time="2026-03-07T01:14:57.442155000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:14:57.442389 containerd[1718]: time="2026-03-07T01:14:57.442219000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:14:57.442389 containerd[1718]: time="2026-03-07T01:14:57.442241300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:14:57.442389 containerd[1718]: time="2026-03-07T01:14:57.442261200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:14:57.442389 containerd[1718]: time="2026-03-07T01:14:57.442281300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:14:57.442631 containerd[1718]: time="2026-03-07T01:14:57.442436900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:14:57.442752 containerd[1718]: time="2026-03-07T01:14:57.442727000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:14:57.442894 containerd[1718]: time="2026-03-07T01:14:57.442866600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:14:57.442952 containerd[1718]: time="2026-03-07T01:14:57.442895100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 7 01:14:57.442952 containerd[1718]: time="2026-03-07T01:14:57.442913000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:14:57.442952 containerd[1718]: time="2026-03-07T01:14:57.442937000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443047 containerd[1718]: time="2026-03-07T01:14:57.442955300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443047 containerd[1718]: time="2026-03-07T01:14:57.442973600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443047 containerd[1718]: time="2026-03-07T01:14:57.442994700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443047 containerd[1718]: time="2026-03-07T01:14:57.443013900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443047 containerd[1718]: time="2026-03-07T01:14:57.443033400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443051500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443068200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443094800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443124600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443142100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443160300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443177400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443195700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443228 containerd[1718]: time="2026-03-07T01:14:57.443211700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443228400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443245800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443265500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443281700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443298000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443315300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443335700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443362200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443378300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443393700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443439500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443461800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:14:57.443530 containerd[1718]: time="2026-03-07T01:14:57.443487900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:14:57.443999 containerd[1718]: time="2026-03-07T01:14:57.443502900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:14:57.443999 containerd[1718]: time="2026-03-07T01:14:57.443516200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 7 01:14:57.443999 containerd[1718]: time="2026-03-07T01:14:57.443540500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:14:57.443999 containerd[1718]: time="2026-03-07T01:14:57.443556400Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:14:57.443999 containerd[1718]: time="2026-03-07T01:14:57.443569300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:14:57.447522 containerd[1718]: time="2026-03-07T01:14:57.446235800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:14:57.447522 containerd[1718]: time="2026-03-07T01:14:57.446409300Z" level=info msg="Connect containerd service" Mar 7 01:14:57.447522 containerd[1718]: time="2026-03-07T01:14:57.446484500Z" level=info msg="using legacy CRI server" Mar 7 01:14:57.447522 containerd[1718]: time="2026-03-07T01:14:57.446496600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:14:57.447522 containerd[1718]: time="2026-03-07T01:14:57.446956400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:14:57.449479 containerd[1718]: time="2026-03-07T01:14:57.449445900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to 
load cni config" Mar 7 01:14:57.449875 containerd[1718]: time="2026-03-07T01:14:57.449816700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:14:57.451916 containerd[1718]: time="2026-03-07T01:14:57.451278300Z" level=info msg="Start subscribing containerd event" Mar 7 01:14:57.451916 containerd[1718]: time="2026-03-07T01:14:57.451348700Z" level=info msg="Start recovering state" Mar 7 01:14:57.451916 containerd[1718]: time="2026-03-07T01:14:57.451429000Z" level=info msg="Start event monitor" Mar 7 01:14:57.451916 containerd[1718]: time="2026-03-07T01:14:57.451447200Z" level=info msg="Start snapshots syncer" Mar 7 01:14:57.451916 containerd[1718]: time="2026-03-07T01:14:57.451458200Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:14:57.451916 containerd[1718]: time="2026-03-07T01:14:57.451470900Z" level=info msg="Start streaming server" Mar 7 01:14:57.451916 containerd[1718]: time="2026-03-07T01:14:57.451311700Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:14:57.451767 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:14:57.452244 containerd[1718]: time="2026-03-07T01:14:57.452226300Z" level=info msg="containerd successfully booted in 0.063310s" Mar 7 01:14:57.535559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:57.539701 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:14:57.543548 systemd[1]: Startup finished in 1.074s (kernel) + 13.446s (initrd) + 13.346s (userspace) = 27.867s. 
Mar 7 01:14:57.548705 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:14:57.985031 login[1831]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 7 01:14:57.986172 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 7 01:14:57.996204 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:14:58.003742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:14:58.007836 systemd-logind[1702]: New session 2 of user core. Mar 7 01:14:58.045509 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:14:58.053461 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:14:58.065545 (systemd)[1859]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:14:58.191467 kubelet[1848]: E0307 01:14:58.191343 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:14:58.195772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:14:58.195985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:14:58.231289 systemd[1859]: Queued start job for default target default.target. Mar 7 01:14:58.238156 systemd[1859]: Created slice app.slice - User Application Slice. Mar 7 01:14:58.238192 systemd[1859]: Reached target paths.target - Paths. Mar 7 01:14:58.238209 systemd[1859]: Reached target timers.target - Timers. Mar 7 01:14:58.239439 systemd[1859]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 7 01:14:58.250552 systemd[1859]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:14:58.250688 systemd[1859]: Reached target sockets.target - Sockets. Mar 7 01:14:58.250716 systemd[1859]: Reached target basic.target - Basic System. Mar 7 01:14:58.250760 systemd[1859]: Reached target default.target - Main User Target. Mar 7 01:14:58.250796 systemd[1859]: Startup finished in 177ms. Mar 7 01:14:58.250895 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:14:58.256319 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:14:58.889977 waagent[1828]: 2026-03-07T01:14:58.889865Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.890457Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.891570Z INFO Daemon Daemon Python: 3.11.9 Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.892375Z INFO Daemon Daemon Run daemon Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.893283Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.893679Z INFO Daemon Daemon Using waagent for provisioning Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.894928Z INFO Daemon Daemon Activate resource disk Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.895789Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.900677Z INFO Daemon Daemon Found device: None Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.901344Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.902276Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to 
detect disk topology, duration=0 Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.904275Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 01:14:58.930234 waagent[1828]: 2026-03-07T01:14:58.904895Z INFO Daemon Daemon Running default provisioning handler Mar 7 01:14:58.935339 waagent[1828]: 2026-03-07T01:14:58.935252Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 7 01:14:58.942897 waagent[1828]: 2026-03-07T01:14:58.942827Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 7 01:14:58.952535 waagent[1828]: 2026-03-07T01:14:58.943078Z INFO Daemon Daemon cloud-init is enabled: False Mar 7 01:14:58.952535 waagent[1828]: 2026-03-07T01:14:58.943582Z INFO Daemon Daemon Copying ovf-env.xml Mar 7 01:14:58.986862 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 7 01:14:58.991678 systemd-logind[1702]: New session 1 of user core. Mar 7 01:14:59.001254 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:14:59.126057 waagent[1828]: 2026-03-07T01:14:59.125953Z INFO Daemon Daemon Successfully mounted dvd Mar 7 01:14:59.145402 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 7 01:14:59.148624 waagent[1828]: 2026-03-07T01:14:59.148552Z INFO Daemon Daemon Detect protocol endpoint Mar 7 01:14:59.165072 waagent[1828]: 2026-03-07T01:14:59.148896Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 7 01:14:59.165072 waagent[1828]: 2026-03-07T01:14:59.150133Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 7 01:14:59.165072 waagent[1828]: 2026-03-07T01:14:59.150597Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 7 01:14:59.165072 waagent[1828]: 2026-03-07T01:14:59.151222Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 7 01:14:59.165072 waagent[1828]: 2026-03-07T01:14:59.151573Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 7 01:14:59.169257 waagent[1828]: 2026-03-07T01:14:59.169206Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 7 01:14:59.178358 waagent[1828]: 2026-03-07T01:14:59.172680Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 7 01:14:59.178358 waagent[1828]: 2026-03-07T01:14:59.172926Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 7 01:14:59.230037 waagent[1828]: 2026-03-07T01:14:59.229936Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 7 01:14:59.233913 waagent[1828]: 2026-03-07T01:14:59.233757Z INFO Daemon Daemon Forcing an update of the goal state. Mar 7 01:14:59.238667 waagent[1828]: 2026-03-07T01:14:59.238610Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 01:14:59.255480 waagent[1828]: 2026-03-07T01:14:59.255423Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 7 01:14:59.267135 waagent[1828]: 2026-03-07T01:14:59.256144Z INFO Daemon Mar 7 01:14:59.267135 waagent[1828]: 2026-03-07T01:14:59.256351Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: fc282d23-3112-4f26-b5e0-293d6e108436 eTag: 14271283506345868745 source: Fabric] Mar 7 01:14:59.267135 waagent[1828]: 2026-03-07T01:14:59.257086Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 7 01:14:59.267135 waagent[1828]: 2026-03-07T01:14:59.257812Z INFO Daemon Mar 7 01:14:59.267135 waagent[1828]: 2026-03-07T01:14:59.258757Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 7 01:14:59.267135 waagent[1828]: 2026-03-07T01:14:59.263127Z INFO Daemon Daemon Downloading artifacts profile blob Mar 7 01:14:59.327480 waagent[1828]: 2026-03-07T01:14:59.327392Z INFO Daemon Downloaded certificate {'thumbprint': 'A602A846BED8BA9A66D1C6869A353E2AB219A007', 'hasPrivateKey': True} Mar 7 01:14:59.333258 waagent[1828]: 2026-03-07T01:14:59.333193Z INFO Daemon Fetch goal state completed Mar 7 01:14:59.340204 waagent[1828]: 2026-03-07T01:14:59.340145Z INFO Daemon Daemon Starting provisioning Mar 7 01:14:59.348161 waagent[1828]: 2026-03-07T01:14:59.340492Z INFO Daemon Daemon Handle ovf-env.xml. Mar 7 01:14:59.348161 waagent[1828]: 2026-03-07T01:14:59.341088Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-1070eafa86] Mar 7 01:14:59.370999 waagent[1828]: 2026-03-07T01:14:59.370922Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-1070eafa86] Mar 7 01:14:59.379646 waagent[1828]: 2026-03-07T01:14:59.371434Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 7 01:14:59.379646 waagent[1828]: 2026-03-07T01:14:59.372448Z INFO Daemon Daemon Primary interface is [eth0] Mar 7 01:14:59.411705 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:14:59.411717 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 7 01:14:59.411768 systemd-networkd[1367]: eth0: DHCP lease lost Mar 7 01:14:59.413200 waagent[1828]: 2026-03-07T01:14:59.413077Z INFO Daemon Daemon Create user account if not exists Mar 7 01:14:59.416649 waagent[1828]: 2026-03-07T01:14:59.416573Z INFO Daemon Daemon User core already exists, skip useradd Mar 7 01:14:59.432397 waagent[1828]: 2026-03-07T01:14:59.416797Z INFO Daemon Daemon Configure sudoer Mar 7 01:14:59.432397 waagent[1828]: 2026-03-07T01:14:59.418171Z INFO Daemon Daemon Configure sshd Mar 7 01:14:59.432397 waagent[1828]: 2026-03-07T01:14:59.419065Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 7 01:14:59.432397 waagent[1828]: 2026-03-07T01:14:59.419837Z INFO Daemon Daemon Deploy ssh public key. Mar 7 01:14:59.433217 systemd-networkd[1367]: eth0: DHCPv6 lease lost Mar 7 01:14:59.470167 systemd-networkd[1367]: eth0: DHCPv4 address 10.200.8.30/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 7 01:15:00.524308 waagent[1828]: 2026-03-07T01:15:00.524244Z INFO Daemon Daemon Provisioning complete Mar 7 01:15:00.534482 waagent[1828]: 2026-03-07T01:15:00.534429Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 7 01:15:00.542418 waagent[1828]: 2026-03-07T01:15:00.534722Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 7 01:15:00.542418 waagent[1828]: 2026-03-07T01:15:00.535742Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 7 01:15:00.667574 waagent[1910]: 2026-03-07T01:15:00.667466Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 7 01:15:00.668020 waagent[1910]: 2026-03-07T01:15:00.667641Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 7 01:15:00.668020 waagent[1910]: 2026-03-07T01:15:00.667721Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 7 01:15:00.735410 waagent[1910]: 2026-03-07T01:15:00.735312Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 7 01:15:00.735637 waagent[1910]: 2026-03-07T01:15:00.735586Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:15:00.735732 waagent[1910]: 2026-03-07T01:15:00.735690Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:15:00.742713 waagent[1910]: 2026-03-07T01:15:00.742652Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 7 01:15:00.747911 waagent[1910]: 2026-03-07T01:15:00.747857Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 7 01:15:00.748387 waagent[1910]: 2026-03-07T01:15:00.748332Z INFO ExtHandler Mar 7 01:15:00.748461 waagent[1910]: 2026-03-07T01:15:00.748430Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9a4ea89b-5f9c-4dde-b82f-c56fd2f36058 eTag: 14271283506345868745 source: Fabric] Mar 7 01:15:00.748784 waagent[1910]: 2026-03-07T01:15:00.748732Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 7 01:15:00.749368 waagent[1910]: 2026-03-07T01:15:00.749313Z INFO ExtHandler Mar 7 01:15:00.749434 waagent[1910]: 2026-03-07T01:15:00.749399Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 7 01:15:00.752283 waagent[1910]: 2026-03-07T01:15:00.752244Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 7 01:15:00.810673 waagent[1910]: 2026-03-07T01:15:00.810523Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A602A846BED8BA9A66D1C6869A353E2AB219A007', 'hasPrivateKey': True} Mar 7 01:15:00.811195 waagent[1910]: 2026-03-07T01:15:00.811139Z INFO ExtHandler Fetch goal state completed Mar 7 01:15:00.824191 waagent[1910]: 2026-03-07T01:15:00.824115Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1910 Mar 7 01:15:00.824366 waagent[1910]: 2026-03-07T01:15:00.824316Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 7 01:15:00.825929 waagent[1910]: 2026-03-07T01:15:00.825870Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 7 01:15:00.826316 waagent[1910]: 2026-03-07T01:15:00.826268Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 7 01:15:00.900610 waagent[1910]: 2026-03-07T01:15:00.900559Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 7 01:15:00.900845 waagent[1910]: 2026-03-07T01:15:00.900796Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 7 01:15:00.907496 waagent[1910]: 2026-03-07T01:15:00.907453Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 7 01:15:00.914494 systemd[1]: Reloading requested from client PID 1923 ('systemctl') (unit waagent.service)... Mar 7 01:15:00.914512 systemd[1]: Reloading... 
Mar 7 01:15:00.999131 zram_generator::config[1957]: No configuration found. Mar 7 01:15:01.138013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:15:01.219507 systemd[1]: Reloading finished in 304 ms. Mar 7 01:15:01.248216 waagent[1910]: 2026-03-07T01:15:01.245875Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 7 01:15:01.254947 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit waagent.service)... Mar 7 01:15:01.254966 systemd[1]: Reloading... Mar 7 01:15:01.360147 zram_generator::config[2048]: No configuration found. Mar 7 01:15:01.486598 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:15:01.569449 systemd[1]: Reloading finished in 313 ms. Mar 7 01:15:01.599504 waagent[1910]: 2026-03-07T01:15:01.599400Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 7 01:15:01.599657 waagent[1910]: 2026-03-07T01:15:01.599609Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 7 01:15:02.551909 waagent[1910]: 2026-03-07T01:15:02.551818Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 7 01:15:02.552630 waagent[1910]: 2026-03-07T01:15:02.552534Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 7 01:15:02.553479 waagent[1910]: 2026-03-07T01:15:02.553398Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 7 01:15:02.553885 waagent[1910]: 2026-03-07T01:15:02.553831Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 7 01:15:02.554085 waagent[1910]: 2026-03-07T01:15:02.553998Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:15:02.554384 waagent[1910]: 2026-03-07T01:15:02.554310Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 7 01:15:02.554444 waagent[1910]: 2026-03-07T01:15:02.554381Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 7 01:15:02.554894 waagent[1910]: 2026-03-07T01:15:02.554719Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 7 01:15:02.554894 waagent[1910]: 2026-03-07T01:15:02.554788Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:15:02.555300 waagent[1910]: 2026-03-07T01:15:02.555248Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 7 01:15:02.555471 waagent[1910]: 2026-03-07T01:15:02.555427Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 7 01:15:02.556217 waagent[1910]: 2026-03-07T01:15:02.556168Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 7 01:15:02.556457 waagent[1910]: 2026-03-07T01:15:02.556409Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 7 01:15:02.556538 waagent[1910]: 2026-03-07T01:15:02.556496Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 7 01:15:02.556538 waagent[1910]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 7 01:15:02.556538 waagent[1910]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 7 01:15:02.556538 waagent[1910]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 7 01:15:02.556538 waagent[1910]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:15:02.556538 waagent[1910]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:15:02.556538 waagent[1910]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:15:02.557731 waagent[1910]: 2026-03-07T01:15:02.557684Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 7 01:15:02.557913 waagent[1910]: 2026-03-07T01:15:02.557868Z INFO EnvHandler ExtHandler Configure routes Mar 7 01:15:02.558036 waagent[1910]: 2026-03-07T01:15:02.557967Z INFO EnvHandler ExtHandler Gateway:None Mar 7 01:15:02.558096 waagent[1910]: 2026-03-07T01:15:02.558054Z INFO EnvHandler ExtHandler Routes:None Mar 7 01:15:02.564578 waagent[1910]: 2026-03-07T01:15:02.564514Z INFO ExtHandler ExtHandler Mar 7 01:15:02.564684 waagent[1910]: 2026-03-07T01:15:02.564641Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0a3d5708-c9e4-4ce4-9712-551db37954a6 correlation 299e711e-e8cc-45b2-a511-327fd3c1d1e7 created: 2026-03-07T01:13:58.275032Z] Mar 7 01:15:02.565044 waagent[1910]: 2026-03-07T01:15:02.564997Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 7 01:15:02.565598 waagent[1910]: 2026-03-07T01:15:02.565551Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 7 01:15:02.576388 waagent[1910]: 2026-03-07T01:15:02.576333Z INFO MonitorHandler ExtHandler Network interfaces: Mar 7 01:15:02.576388 waagent[1910]: Executing ['ip', '-a', '-o', 'link']: Mar 7 01:15:02.576388 waagent[1910]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 7 01:15:02.576388 waagent[1910]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:1a:c9 brd ff:ff:ff:ff:ff:ff Mar 7 01:15:02.576388 waagent[1910]: 3: enP48501s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:1a:c9 brd ff:ff:ff:ff:ff:ff\ altname enP48501p0s2 Mar 7 01:15:02.576388 waagent[1910]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 7 01:15:02.576388 waagent[1910]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 7 01:15:02.576388 waagent[1910]: 2: eth0 inet 10.200.8.30/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 7 01:15:02.576388 waagent[1910]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 7 01:15:02.576388 waagent[1910]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 7 01:15:02.576388 waagent[1910]: 2: eth0 inet6 fe80::6245:bdff:fee0:1ac9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 7 01:15:02.602723 waagent[1910]: 2026-03-07T01:15:02.602660Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A8309B41-6D0F-4284-A8C5-D1C151CA4522;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 7 01:15:02.621975 waagent[1910]: 2026-03-07T01:15:02.621912Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Mar 7 01:15:02.621975 waagent[1910]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:15:02.621975 waagent[1910]: pkts bytes target prot opt in out source destination Mar 7 01:15:02.621975 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:15:02.621975 waagent[1910]: pkts bytes target prot opt in out source destination Mar 7 01:15:02.621975 waagent[1910]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:15:02.621975 waagent[1910]: pkts bytes target prot opt in out source destination Mar 7 01:15:02.621975 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:15:02.621975 waagent[1910]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:15:02.621975 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:15:02.625275 waagent[1910]: 2026-03-07T01:15:02.625218Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 7 01:15:02.625275 waagent[1910]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:15:02.625275 waagent[1910]: pkts bytes target prot opt in out source destination Mar 7 01:15:02.625275 waagent[1910]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:15:02.625275 waagent[1910]: pkts bytes target prot opt in out source destination Mar 7 01:15:02.625275 waagent[1910]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:15:02.625275 waagent[1910]: pkts bytes target prot opt in out source destination Mar 7 01:15:02.625275 waagent[1910]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:15:02.625275 waagent[1910]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:15:02.625275 waagent[1910]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:15:02.625649 waagent[1910]: 2026-03-07T01:15:02.625521Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 7 01:15:08.221516 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Mar 7 01:15:08.227339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:08.334668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:08.347440 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:15:09.057616 kubelet[2144]: E0307 01:15:09.057536 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:15:09.061390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:15:09.061618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:15:15.041568 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:15:15.042962 systemd[1]: Started sshd@0-10.200.8.30:22-10.200.16.10:53272.service - OpenSSH per-connection server daemon (10.200.16.10:53272). Mar 7 01:15:15.680825 sshd[2153]: Accepted publickey for core from 10.200.16.10 port 53272 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:15:15.682399 sshd[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:15.687124 systemd-logind[1702]: New session 3 of user core. Mar 7 01:15:15.696318 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:15:16.230144 systemd[1]: Started sshd@1-10.200.8.30:22-10.200.16.10:53274.service - OpenSSH per-connection server daemon (10.200.16.10:53274). 
Mar 7 01:15:16.857624 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 53274 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:15:16.859141 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:16.863194 systemd-logind[1702]: New session 4 of user core. Mar 7 01:15:16.873269 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:15:17.302534 sshd[2158]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:17.306428 systemd-logind[1702]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:15:17.307339 systemd[1]: sshd@1-10.200.8.30:22-10.200.16.10:53274.service: Deactivated successfully. Mar 7 01:15:17.309695 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:15:17.310746 systemd-logind[1702]: Removed session 4. Mar 7 01:15:17.412083 systemd[1]: Started sshd@2-10.200.8.30:22-10.200.16.10:53280.service - OpenSSH per-connection server daemon (10.200.16.10:53280). Mar 7 01:15:18.035831 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 53280 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:15:18.037322 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:18.042179 systemd-logind[1702]: New session 5 of user core. Mar 7 01:15:18.051266 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:15:18.473723 sshd[2165]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:18.477659 systemd[1]: sshd@2-10.200.8.30:22-10.200.16.10:53280.service: Deactivated successfully. Mar 7 01:15:18.479578 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:15:18.480323 systemd-logind[1702]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:15:18.481708 systemd-logind[1702]: Removed session 5. 
Mar 7 01:15:18.588139 systemd[1]: Started sshd@3-10.200.8.30:22-10.200.16.10:53288.service - OpenSSH per-connection server daemon (10.200.16.10:53288). Mar 7 01:15:19.102634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:15:19.109753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:19.216063 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 53288 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:15:19.216611 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:19.219716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:19.226251 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:15:19.227693 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:15:19.228241 systemd-logind[1702]: New session 6 of user core. Mar 7 01:15:19.266632 kubelet[2181]: E0307 01:15:19.266537 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:15:19.269199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:15:19.269413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:15:19.659808 sshd[2172]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:19.663177 systemd[1]: sshd@3-10.200.8.30:22-10.200.16.10:53288.service: Deactivated successfully. Mar 7 01:15:19.665323 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:15:19.666756 systemd-logind[1702]: Session 6 logged out. Waiting for processes to exit. 
Mar 7 01:15:19.667919 systemd-logind[1702]: Removed session 6. Mar 7 01:15:19.769144 systemd[1]: Started sshd@4-10.200.8.30:22-10.200.16.10:53300.service - OpenSSH per-connection server daemon (10.200.16.10:53300). Mar 7 01:15:19.791166 chronyd[1722]: Selected source PHC0 Mar 7 01:15:20.394618 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 53300 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:15:20.396091 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:20.400163 systemd-logind[1702]: New session 7 of user core. Mar 7 01:15:20.409294 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:15:20.765221 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:15:20.765601 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:20.783548 sudo[2197]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:20.883928 sshd[2194]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:20.888235 systemd-logind[1702]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:15:20.888810 systemd[1]: sshd@4-10.200.8.30:22-10.200.16.10:53300.service: Deactivated successfully. Mar 7 01:15:20.890811 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:15:20.891736 systemd-logind[1702]: Removed session 7. Mar 7 01:15:20.999436 systemd[1]: Started sshd@5-10.200.8.30:22-10.200.16.10:55698.service - OpenSSH per-connection server daemon (10.200.16.10:55698). Mar 7 01:15:21.633684 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 55698 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:15:21.635301 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:21.639188 systemd-logind[1702]: New session 8 of user core. Mar 7 01:15:21.646273 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 7 01:15:21.978919 sudo[2206]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:15:21.979303 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:21.982610 sudo[2206]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:21.987654 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:15:21.988001 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:22.002436 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:15:22.003973 auditctl[2209]: No rules Mar 7 01:15:22.005153 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:15:22.005338 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:15:22.007635 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:15:22.040176 augenrules[2227]: No rules Mar 7 01:15:22.041774 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:15:22.043313 sudo[2205]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:22.144263 sshd[2202]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:22.147588 systemd[1]: sshd@5-10.200.8.30:22-10.200.16.10:55698.service: Deactivated successfully. Mar 7 01:15:22.149574 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:15:22.150972 systemd-logind[1702]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:15:22.151930 systemd-logind[1702]: Removed session 8. Mar 7 01:15:22.255064 systemd[1]: Started sshd@6-10.200.8.30:22-10.200.16.10:55706.service - OpenSSH per-connection server daemon (10.200.16.10:55706). 
Mar 7 01:15:22.883144 sshd[2235]: Accepted publickey for core from 10.200.16.10 port 55706 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:15:22.884004 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:15:22.889046 systemd-logind[1702]: New session 9 of user core. Mar 7 01:15:22.894274 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:15:23.226653 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:15:23.227024 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:15:25.015436 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:15:25.016874 (dockerd)[2254]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:15:27.012857 dockerd[2254]: time="2026-03-07T01:15:27.012792731Z" level=info msg="Starting up" Mar 7 01:15:27.652999 dockerd[2254]: time="2026-03-07T01:15:27.652949031Z" level=info msg="Loading containers: start." Mar 7 01:15:27.910364 kernel: Initializing XFRM netlink socket Mar 7 01:15:28.106694 systemd-networkd[1367]: docker0: Link UP Mar 7 01:15:28.137626 dockerd[2254]: time="2026-03-07T01:15:28.137420037Z" level=info msg="Loading containers: done." 
Mar 7 01:15:28.233778 dockerd[2254]: time="2026-03-07T01:15:28.233725951Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:15:28.234030 dockerd[2254]: time="2026-03-07T01:15:28.233848060Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:15:28.234030 dockerd[2254]: time="2026-03-07T01:15:28.233984669Z" level=info msg="Daemon has completed initialization" Mar 7 01:15:28.296528 dockerd[2254]: time="2026-03-07T01:15:28.295799579Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:15:28.296242 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:15:28.869541 containerd[1718]: time="2026-03-07T01:15:28.869500876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 7 01:15:29.471584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 01:15:29.477331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:30.130631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:15:30.135368 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:15:30.273707 kubelet[2398]: E0307 01:15:30.273652 2398 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:15:30.276000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:15:30.276244 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:15:30.404376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197342809.mount: Deactivated successfully. Mar 7 01:15:32.091563 containerd[1718]: time="2026-03-07T01:15:32.091497058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:32.094377 containerd[1718]: time="2026-03-07T01:15:32.094127198Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074505" Mar 7 01:15:32.097834 containerd[1718]: time="2026-03-07T01:15:32.097771153Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:32.102906 containerd[1718]: time="2026-03-07T01:15:32.102633427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:32.104037 containerd[1718]: time="2026-03-07T01:15:32.103850145Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id 
\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 3.234304467s" Mar 7 01:15:32.104037 containerd[1718]: time="2026-03-07T01:15:32.103895646Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 7 01:15:32.104750 containerd[1718]: time="2026-03-07T01:15:32.104720858Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 7 01:15:33.772222 containerd[1718]: time="2026-03-07T01:15:33.772168459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:33.775993 containerd[1718]: time="2026-03-07T01:15:33.775926816Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165831" Mar 7 01:15:33.779923 containerd[1718]: time="2026-03-07T01:15:33.779863675Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:33.784741 containerd[1718]: time="2026-03-07T01:15:33.784705148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:33.785919 containerd[1718]: time="2026-03-07T01:15:33.785763164Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.681006605s" Mar 7 01:15:33.785919 containerd[1718]: time="2026-03-07T01:15:33.785813265Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 7 01:15:33.786781 containerd[1718]: time="2026-03-07T01:15:33.786447975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 7 01:15:35.129486 containerd[1718]: time="2026-03-07T01:15:35.129420371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:35.132979 containerd[1718]: time="2026-03-07T01:15:35.132921624Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729832" Mar 7 01:15:35.136407 containerd[1718]: time="2026-03-07T01:15:35.136355676Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:35.143662 containerd[1718]: time="2026-03-07T01:15:35.143613086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:35.144808 containerd[1718]: time="2026-03-07T01:15:35.144666902Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.358185827s" Mar 7 01:15:35.144808 containerd[1718]: 
time="2026-03-07T01:15:35.144703202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 7 01:15:35.145562 containerd[1718]: time="2026-03-07T01:15:35.145529615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 7 01:15:36.329129 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Mar 7 01:15:36.391982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367988937.mount: Deactivated successfully. Mar 7 01:15:36.817558 containerd[1718]: time="2026-03-07T01:15:36.817500937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:36.819808 containerd[1718]: time="2026-03-07T01:15:36.819734969Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861778" Mar 7 01:15:36.823207 containerd[1718]: time="2026-03-07T01:15:36.823142519Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:36.827546 containerd[1718]: time="2026-03-07T01:15:36.827470683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:36.828462 containerd[1718]: time="2026-03-07T01:15:36.828050191Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.682391675s" Mar 7 01:15:36.828462 containerd[1718]: 
time="2026-03-07T01:15:36.828090892Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 7 01:15:36.828809 containerd[1718]: time="2026-03-07T01:15:36.828785602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 01:15:37.531077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162985657.mount: Deactivated successfully. Mar 7 01:15:39.021023 containerd[1718]: time="2026-03-07T01:15:39.020964957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.024373 containerd[1718]: time="2026-03-07T01:15:39.024117303Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Mar 7 01:15:39.027496 containerd[1718]: time="2026-03-07T01:15:39.027435552Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.034044 containerd[1718]: time="2026-03-07T01:15:39.033892747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.035133 containerd[1718]: time="2026-03-07T01:15:39.034970963Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.206148559s" Mar 7 01:15:39.035133 containerd[1718]: time="2026-03-07T01:15:39.035010563Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 01:15:39.035714 containerd[1718]: time="2026-03-07T01:15:39.035669973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:15:39.589486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072384252.mount: Deactivated successfully. Mar 7 01:15:39.609397 containerd[1718]: time="2026-03-07T01:15:39.609343287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.612397 containerd[1718]: time="2026-03-07T01:15:39.612241730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Mar 7 01:15:39.615919 containerd[1718]: time="2026-03-07T01:15:39.615685180Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.620519 containerd[1718]: time="2026-03-07T01:15:39.620467751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:39.621788 containerd[1718]: time="2026-03-07T01:15:39.621235162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 585.423787ms" Mar 7 01:15:39.621788 containerd[1718]: time="2026-03-07T01:15:39.621271962Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 
01:15:39.622181 containerd[1718]: time="2026-03-07T01:15:39.622151475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 01:15:40.246166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488996600.mount: Deactivated successfully. Mar 7 01:15:40.471819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:15:40.481214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:40.646285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:40.661645 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:15:40.737700 kubelet[2554]: E0307 01:15:40.737643 2554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:15:40.740119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:15:40.740345 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:15:41.680457 update_engine[1703]: I20260307 01:15:41.680392 1703 update_attempter.cc:509] Updating boot flags... 
Mar 7 01:15:41.759270 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2599) Mar 7 01:15:42.469070 containerd[1718]: time="2026-03-07T01:15:42.469016733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:42.471721 containerd[1718]: time="2026-03-07T01:15:42.471660572Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860682" Mar 7 01:15:42.474823 containerd[1718]: time="2026-03-07T01:15:42.474771918Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:42.479603 containerd[1718]: time="2026-03-07T01:15:42.479558388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:42.480763 containerd[1718]: time="2026-03-07T01:15:42.480621503Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 2.858431527s" Mar 7 01:15:42.480763 containerd[1718]: time="2026-03-07T01:15:42.480657404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 01:15:45.398333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:45.404419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:15:45.443835 systemd[1]: Reloading requested from client PID 2679 ('systemctl') (unit session-9.scope)... Mar 7 01:15:45.444025 systemd[1]: Reloading... Mar 7 01:15:45.570131 zram_generator::config[2716]: No configuration found. Mar 7 01:15:45.706662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:15:45.786973 systemd[1]: Reloading finished in 342 ms. Mar 7 01:15:45.835808 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:15:45.835911 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:15:45.836352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:45.837943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:46.220112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:46.234436 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:15:46.271529 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:15:46.271529 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 01:15:46.271978 kubelet[2789]: I0307 01:15:46.271598 2789 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:15:47.030860 kubelet[2789]: I0307 01:15:47.030715 2789 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:15:47.030860 kubelet[2789]: I0307 01:15:47.030750 2789 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:15:47.050738 kubelet[2789]: I0307 01:15:47.050348 2789 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:15:47.050738 kubelet[2789]: I0307 01:15:47.050417 2789 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:15:47.051115 kubelet[2789]: I0307 01:15:47.050997 2789 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:15:47.063999 kubelet[2789]: E0307 01:15:47.063936 2789 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:15:47.064196 kubelet[2789]: I0307 01:15:47.064170 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:15:47.070131 kubelet[2789]: E0307 01:15:47.069921 2789 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:15:47.070131 kubelet[2789]: I0307 01:15:47.069996 2789 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 7 01:15:47.073738 kubelet[2789]: I0307 01:15:47.073707 2789 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 01:15:47.074809 kubelet[2789]: I0307 01:15:47.074769 2789 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:15:47.074989 kubelet[2789]: I0307 01:15:47.074807 2789 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-1070eafa86","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
Mar 7 01:15:47.075172 kubelet[2789]: I0307 01:15:47.074992 2789 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:15:47.075172 kubelet[2789]: I0307 01:15:47.075006 2789 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:15:47.075172 kubelet[2789]: I0307 01:15:47.075139 2789 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:15:47.080275 kubelet[2789]: I0307 01:15:47.080249 2789 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:47.080481 kubelet[2789]: I0307 01:15:47.080465 2789 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:15:47.080549 kubelet[2789]: I0307 01:15:47.080494 2789 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:15:47.080549 kubelet[2789]: I0307 01:15:47.080524 2789 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:15:47.080549 kubelet[2789]: I0307 01:15:47.080546 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:15:47.084304 kubelet[2789]: E0307 01:15:47.084257 2789 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:47.084929 kubelet[2789]: E0307 01:15:47.084773 2789 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-1070eafa86&limit=500&resourceVersion=0\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:47.084929 kubelet[2789]: I0307 01:15:47.084891 2789 kuberuntime_manager.go:291] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:15:47.087122 kubelet[2789]: I0307 01:15:47.085513 2789 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:15:47.087122 kubelet[2789]: I0307 01:15:47.085562 2789 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:15:47.087122 kubelet[2789]: W0307 01:15:47.085623 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:15:47.088941 kubelet[2789]: I0307 01:15:47.088923 2789 server.go:1262] "Started kubelet" Mar 7 01:15:47.095821 kubelet[2789]: I0307 01:15:47.095796 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:15:47.098362 kubelet[2789]: E0307 01:15:47.096850 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.30:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.30:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-1070eafa86.189a6a2d86a724d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-1070eafa86,UID:ci-4081.3.6-n-1070eafa86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-1070eafa86,},FirstTimestamp:2026-03-07 01:15:47.08888495 +0000 UTC m=+0.851133647,LastTimestamp:2026-03-07 01:15:47.08888495 +0000 UTC m=+0.851133647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-1070eafa86,}" Mar 7 01:15:47.102207 kubelet[2789]: I0307 01:15:47.102175 2789 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 
Mar 7 01:15:47.103803 kubelet[2789]: I0307 01:15:47.103780 2789 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:15:47.109914 kubelet[2789]: I0307 01:15:47.109837 2789 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:15:47.110033 kubelet[2789]: I0307 01:15:47.109949 2789 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:15:47.110201 kubelet[2789]: E0307 01:15:47.109890 2789 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:15:47.110319 kubelet[2789]: I0307 01:15:47.110256 2789 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:15:47.110576 kubelet[2789]: I0307 01:15:47.110554 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:15:47.111355 kubelet[2789]: I0307 01:15:47.111339 2789 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:15:47.111704 kubelet[2789]: I0307 01:15:47.111681 2789 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:15:47.112651 kubelet[2789]: E0307 01:15:47.112631 2789 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-1070eafa86\" not found" Mar 7 01:15:47.113520 kubelet[2789]: I0307 01:15:47.113503 2789 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:15:47.114033 kubelet[2789]: E0307 01:15:47.113898 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-1070eafa86?timeout=10s\": dial tcp 10.200.8.30:6443: connect: connection refused" interval="200ms" Mar 7 01:15:47.114214 kubelet[2789]: E0307 01:15:47.114184 2789 
reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:15:47.116300 kubelet[2789]: I0307 01:15:47.116273 2789 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:15:47.116300 kubelet[2789]: I0307 01:15:47.116294 2789 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:15:47.116426 kubelet[2789]: I0307 01:15:47.116361 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:15:47.160342 kubelet[2789]: I0307 01:15:47.160278 2789 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:15:47.162552 kubelet[2789]: I0307 01:15:47.162362 2789 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:15:47.162552 kubelet[2789]: I0307 01:15:47.162410 2789 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:15:47.162884 kubelet[2789]: I0307 01:15:47.162448 2789 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:15:47.163006 kubelet[2789]: E0307 01:15:47.162979 2789 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:15:47.164575 kubelet[2789]: E0307 01:15:47.164542 2789 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:15:47.195475 kubelet[2789]: I0307 01:15:47.195440 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:15:47.195475 kubelet[2789]: I0307 01:15:47.195490 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:15:47.195475 kubelet[2789]: I0307 01:15:47.195512 2789 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:47.201464 kubelet[2789]: I0307 01:15:47.201431 2789 policy_none.go:49] "None policy: Start" Mar 7 01:15:47.201464 kubelet[2789]: I0307 01:15:47.201457 2789 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:15:47.201464 kubelet[2789]: I0307 01:15:47.201471 2789 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:15:47.206647 kubelet[2789]: I0307 01:15:47.206623 2789 policy_none.go:47] "Start" Mar 7 01:15:47.210893 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 7 01:15:47.216693 kubelet[2789]: E0307 01:15:47.216666 2789 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-1070eafa86\" not found" Mar 7 01:15:47.228336 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:15:47.231687 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 01:15:47.242158 kubelet[2789]: E0307 01:15:47.241922 2789 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:15:47.242267 kubelet[2789]: I0307 01:15:47.242196 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:15:47.242267 kubelet[2789]: I0307 01:15:47.242212 2789 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:15:47.243318 kubelet[2789]: I0307 01:15:47.242647 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:15:47.243927 kubelet[2789]: E0307 01:15:47.243903 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:15:47.244009 kubelet[2789]: E0307 01:15:47.243963 2789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-1070eafa86\" not found" Mar 7 01:15:47.276252 systemd[1]: Created slice kubepods-burstable-pod7a2ff459ee11aab5503a9cd16fc163d8.slice - libcontainer container kubepods-burstable-pod7a2ff459ee11aab5503a9cd16fc163d8.slice. 
Mar 7 01:15:47.282845 kubelet[2789]: E0307 01:15:47.282754 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.288858 systemd[1]: Created slice kubepods-burstable-pod55a1045a995fac32db4ce39dd41e51e9.slice - libcontainer container kubepods-burstable-pod55a1045a995fac32db4ce39dd41e51e9.slice. Mar 7 01:15:47.290846 kubelet[2789]: E0307 01:15:47.290817 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.304749 systemd[1]: Created slice kubepods-burstable-pod29ade1129e920079b467063aa8400f4a.slice - libcontainer container kubepods-burstable-pod29ade1129e920079b467063aa8400f4a.slice. Mar 7 01:15:47.306442 kubelet[2789]: E0307 01:15:47.306415 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.314908 kubelet[2789]: E0307 01:15:47.314868 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-1070eafa86?timeout=10s\": dial tcp 10.200.8.30:6443: connect: connection refused" interval="400ms" Mar 7 01:15:47.317006 kubelet[2789]: I0307 01:15:47.316980 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317194 kubelet[2789]: I0307 01:15:47.317013 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317194 kubelet[2789]: I0307 01:15:47.317039 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a2ff459ee11aab5503a9cd16fc163d8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-1070eafa86\" (UID: \"7a2ff459ee11aab5503a9cd16fc163d8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317194 kubelet[2789]: I0307 01:15:47.317064 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317194 kubelet[2789]: I0307 01:15:47.317087 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317194 kubelet[2789]: I0307 01:15:47.317129 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55a1045a995fac32db4ce39dd41e51e9-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" (UID: \"55a1045a995fac32db4ce39dd41e51e9\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317362 kubelet[2789]: I0307 01:15:47.317155 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55a1045a995fac32db4ce39dd41e51e9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" (UID: \"55a1045a995fac32db4ce39dd41e51e9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317362 kubelet[2789]: I0307 01:15:47.317177 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55a1045a995fac32db4ce39dd41e51e9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" (UID: \"55a1045a995fac32db4ce39dd41e51e9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.317362 kubelet[2789]: I0307 01:15:47.317197 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.346024 kubelet[2789]: I0307 01:15:47.345983 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.346391 kubelet[2789]: E0307 01:15:47.346359 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.30:6443/api/v1/nodes\": dial tcp 10.200.8.30:6443: connect: connection refused" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.548635 kubelet[2789]: I0307 01:15:47.548515 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.548904 kubelet[2789]: E0307 01:15:47.548872 2789 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.30:6443/api/v1/nodes\": dial tcp 10.200.8.30:6443: connect: connection refused" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.589495 containerd[1718]: time="2026-03-07T01:15:47.589450322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-1070eafa86,Uid:7a2ff459ee11aab5503a9cd16fc163d8,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:47.596492 containerd[1718]: time="2026-03-07T01:15:47.596459230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-1070eafa86,Uid:55a1045a995fac32db4ce39dd41e51e9,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:47.612999 containerd[1718]: time="2026-03-07T01:15:47.612961183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-1070eafa86,Uid:29ade1129e920079b467063aa8400f4a,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:47.715614 kubelet[2789]: E0307 01:15:47.715565 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-1070eafa86?timeout=10s\": dial tcp 10.200.8.30:6443: connect: connection refused" interval="800ms" Mar 7 01:15:47.951821 kubelet[2789]: I0307 01:15:47.951463 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.951821 kubelet[2789]: E0307 01:15:47.951778 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.30:6443/api/v1/nodes\": dial tcp 10.200.8.30:6443: connect: connection refused" node="ci-4081.3.6-n-1070eafa86" Mar 7 01:15:47.988759 kubelet[2789]: E0307 01:15:47.988718 2789 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:47.998181 kubelet[2789]: E0307 01:15:47.998043 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.30:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.30:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-1070eafa86.189a6a2d86a724d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-1070eafa86,UID:ci-4081.3.6-n-1070eafa86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-1070eafa86,},FirstTimestamp:2026-03-07 01:15:47.08888495 +0000 UTC m=+0.851133647,LastTimestamp:2026-03-07 01:15:47.08888495 +0000 UTC m=+0.851133647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-1070eafa86,}" Mar 7 01:15:48.068607 kubelet[2789]: E0307 01:15:48.068565 2789 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-1070eafa86&limit=500&resourceVersion=0\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:48.177169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9201918.mount: Deactivated successfully. 
Mar 7 01:15:48.207743 containerd[1718]: time="2026-03-07T01:15:48.207599397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:48.209817 containerd[1718]: time="2026-03-07T01:15:48.209709829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 7 01:15:48.212830 containerd[1718]: time="2026-03-07T01:15:48.212783977Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:48.220445 containerd[1718]: time="2026-03-07T01:15:48.219719083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:48.220445 containerd[1718]: time="2026-03-07T01:15:48.219776584Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:48.222242 containerd[1718]: time="2026-03-07T01:15:48.222198821Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:48.224398 containerd[1718]: time="2026-03-07T01:15:48.224344554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:48.228090 containerd[1718]: time="2026-03-07T01:15:48.228036010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:48.229146 
containerd[1718]: time="2026-03-07T01:15:48.228871623Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 615.830439ms"
Mar 7 01:15:48.231052 containerd[1718]: time="2026-03-07T01:15:48.231005956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.477233ms"
Mar 7 01:15:48.238451 containerd[1718]: time="2026-03-07T01:15:48.238415569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.889638ms"
Mar 7 01:15:48.265796 kubelet[2789]: E0307 01:15:48.265753 2789 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:15:48.516870 kubelet[2789]: E0307 01:15:48.516821 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-1070eafa86?timeout=10s\": dial tcp 10.200.8.30:6443: connect: connection refused" interval="1.6s"
Mar 7 01:15:48.648864 kubelet[2789]: E0307 01:15:48.648621 2789 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:15:48.754515 kubelet[2789]: I0307 01:15:48.754373 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:48.754887 kubelet[2789]: E0307 01:15:48.754819 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.30:6443/api/v1/nodes\": dial tcp 10.200.8.30:6443: connect: connection refused" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:49.138378 kubelet[2789]: E0307 01:15:49.138331 2789 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.30:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:15:49.380763 containerd[1718]: time="2026-03-07T01:15:49.380367373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:15:49.380763 containerd[1718]: time="2026-03-07T01:15:49.380435674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:15:49.380763 containerd[1718]: time="2026-03-07T01:15:49.380474574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:49.380763 containerd[1718]: time="2026-03-07T01:15:49.380591476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:49.382074 containerd[1718]: time="2026-03-07T01:15:49.381310287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:15:49.382074 containerd[1718]: time="2026-03-07T01:15:49.381368788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:15:49.382074 containerd[1718]: time="2026-03-07T01:15:49.381390189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:49.382074 containerd[1718]: time="2026-03-07T01:15:49.381469990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:49.396853 containerd[1718]: time="2026-03-07T01:15:49.394349887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:15:49.396853 containerd[1718]: time="2026-03-07T01:15:49.394413288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:15:49.396853 containerd[1718]: time="2026-03-07T01:15:49.394455689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:49.396853 containerd[1718]: time="2026-03-07T01:15:49.394588991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:49.440346 systemd[1]: Started cri-containerd-a053cc45cc79432303e1b3008b820ba4cd28f2da7e3ef0ae52c00ff8692e21d7.scope - libcontainer container a053cc45cc79432303e1b3008b820ba4cd28f2da7e3ef0ae52c00ff8692e21d7.
Mar 7 01:15:49.441883 systemd[1]: Started cri-containerd-e287d9d3dc1ae98ca78f5b683ce7397d2f09ffcc129834bee57f6f472587c129.scope - libcontainer container e287d9d3dc1ae98ca78f5b683ce7397d2f09ffcc129834bee57f6f472587c129.
Mar 7 01:15:49.444995 systemd[1]: Started cri-containerd-ea0675e81ffc56844ebd8a8d5d6e5e1907d95ca45747ec0bd37363f22781a456.scope - libcontainer container ea0675e81ffc56844ebd8a8d5d6e5e1907d95ca45747ec0bd37363f22781a456.
Mar 7 01:15:49.518986 containerd[1718]: time="2026-03-07T01:15:49.518937497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-1070eafa86,Uid:29ade1129e920079b467063aa8400f4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e287d9d3dc1ae98ca78f5b683ce7397d2f09ffcc129834bee57f6f472587c129\""
Mar 7 01:15:49.536883 containerd[1718]: time="2026-03-07T01:15:49.536707469Z" level=info msg="CreateContainer within sandbox \"e287d9d3dc1ae98ca78f5b683ce7397d2f09ffcc129834bee57f6f472587c129\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 01:15:49.549592 containerd[1718]: time="2026-03-07T01:15:49.548174745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-1070eafa86,Uid:7a2ff459ee11aab5503a9cd16fc163d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea0675e81ffc56844ebd8a8d5d6e5e1907d95ca45747ec0bd37363f22781a456\""
Mar 7 01:15:49.556378 containerd[1718]: time="2026-03-07T01:15:49.556321270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-1070eafa86,Uid:55a1045a995fac32db4ce39dd41e51e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a053cc45cc79432303e1b3008b820ba4cd28f2da7e3ef0ae52c00ff8692e21d7\""
Mar 7 01:15:49.561584 containerd[1718]: time="2026-03-07T01:15:49.561494049Z" level=info msg="CreateContainer within sandbox \"ea0675e81ffc56844ebd8a8d5d6e5e1907d95ca45747ec0bd37363f22781a456\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 01:15:49.567064 containerd[1718]: time="2026-03-07T01:15:49.567031834Z" level=info msg="CreateContainer within sandbox \"a053cc45cc79432303e1b3008b820ba4cd28f2da7e3ef0ae52c00ff8692e21d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 01:15:49.596347 containerd[1718]: time="2026-03-07T01:15:49.596290382Z" level=info msg="CreateContainer within sandbox \"e287d9d3dc1ae98ca78f5b683ce7397d2f09ffcc129834bee57f6f472587c129\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8d13603a87ab060effeb054beba36fcd0332cb6bc514cef6d5f6fbef583b8c6\""
Mar 7 01:15:49.597276 containerd[1718]: time="2026-03-07T01:15:49.597100695Z" level=info msg="StartContainer for \"c8d13603a87ab060effeb054beba36fcd0332cb6bc514cef6d5f6fbef583b8c6\""
Mar 7 01:15:49.617793 containerd[1718]: time="2026-03-07T01:15:49.617742011Z" level=info msg="CreateContainer within sandbox \"ea0675e81ffc56844ebd8a8d5d6e5e1907d95ca45747ec0bd37363f22781a456\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b9b5a8c66d86d2186476d3453ed976c0c48c8cd817bb681b0c462129ef46c130\""
Mar 7 01:15:49.620429 containerd[1718]: time="2026-03-07T01:15:49.619538539Z" level=info msg="StartContainer for \"b9b5a8c66d86d2186476d3453ed976c0c48c8cd817bb681b0c462129ef46c130\""
Mar 7 01:15:49.628323 systemd[1]: Started cri-containerd-c8d13603a87ab060effeb054beba36fcd0332cb6bc514cef6d5f6fbef583b8c6.scope - libcontainer container c8d13603a87ab060effeb054beba36fcd0332cb6bc514cef6d5f6fbef583b8c6.
Mar 7 01:15:49.629445 containerd[1718]: time="2026-03-07T01:15:49.628064969Z" level=info msg="CreateContainer within sandbox \"a053cc45cc79432303e1b3008b820ba4cd28f2da7e3ef0ae52c00ff8692e21d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"21bad5ee04cd5bd876d891095a002f6e8977ec673e13b70f0767926fe0f4cb3a\""
Mar 7 01:15:49.631223 containerd[1718]: time="2026-03-07T01:15:49.630428306Z" level=info msg="StartContainer for \"21bad5ee04cd5bd876d891095a002f6e8977ec673e13b70f0767926fe0f4cb3a\""
Mar 7 01:15:49.683309 systemd[1]: Started cri-containerd-b9b5a8c66d86d2186476d3453ed976c0c48c8cd817bb681b0c462129ef46c130.scope - libcontainer container b9b5a8c66d86d2186476d3453ed976c0c48c8cd817bb681b0c462129ef46c130.
Mar 7 01:15:49.695448 systemd[1]: Started cri-containerd-21bad5ee04cd5bd876d891095a002f6e8977ec673e13b70f0767926fe0f4cb3a.scope - libcontainer container 21bad5ee04cd5bd876d891095a002f6e8977ec673e13b70f0767926fe0f4cb3a.
Mar 7 01:15:49.724364 containerd[1718]: time="2026-03-07T01:15:49.724199643Z" level=info msg="StartContainer for \"c8d13603a87ab060effeb054beba36fcd0332cb6bc514cef6d5f6fbef583b8c6\" returns successfully"
Mar 7 01:15:49.773373 containerd[1718]: time="2026-03-07T01:15:49.773293995Z" level=info msg="StartContainer for \"21bad5ee04cd5bd876d891095a002f6e8977ec673e13b70f0767926fe0f4cb3a\" returns successfully"
Mar 7 01:15:49.815039 containerd[1718]: time="2026-03-07T01:15:49.814882033Z" level=info msg="StartContainer for \"b9b5a8c66d86d2186476d3453ed976c0c48c8cd817bb681b0c462129ef46c130\" returns successfully"
Mar 7 01:15:50.183242 kubelet[2789]: E0307 01:15:50.181241 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:50.186774 kubelet[2789]: E0307 01:15:50.185826 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:50.190050 kubelet[2789]: E0307 01:15:50.190032 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:50.360751 kubelet[2789]: I0307 01:15:50.360044 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:51.192425 kubelet[2789]: E0307 01:15:51.191804 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:51.192425 kubelet[2789]: E0307 01:15:51.192272 2789 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:51.911842 kubelet[2789]: E0307 01:15:51.911794 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-1070eafa86\" not found" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:52.700074 kubelet[2789]: I0307 01:15:52.700035 2789 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:52.700074 kubelet[2789]: E0307 01:15:52.700077 2789 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-1070eafa86\": node \"ci-4081.3.6-n-1070eafa86\" not found"
Mar 7 01:15:52.714133 kubelet[2789]: I0307 01:15:52.713203 2789 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:52.745615 kubelet[2789]: I0307 01:15:52.745581 2789 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:52.745793 kubelet[2789]: I0307 01:15:52.745762 2789 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:52.760087 kubelet[2789]: I0307 01:15:52.760047 2789 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:52.760290 kubelet[2789]: I0307 01:15:52.760193 2789 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:52.776133 kubelet[2789]: I0307 01:15:52.776083 2789 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:53.695311 kubelet[2789]: I0307 01:15:53.695274 2789 apiserver.go:52] "Watching apiserver"
Mar 7 01:15:53.713719 kubelet[2789]: I0307 01:15:53.713685 2789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:15:54.501729 systemd[1]: Reloading requested from client PID 3072 ('systemctl') (unit session-9.scope)...
Mar 7 01:15:54.501747 systemd[1]: Reloading...
Mar 7 01:15:54.621399 zram_generator::config[3109]: No configuration found.
Mar 7 01:15:54.750921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:15:54.844987 systemd[1]: Reloading finished in 342 ms.
Mar 7 01:15:54.885372 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:54.895712 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:15:54.896159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:54.896263 systemd[1]: kubelet.service: Consumed 1.242s CPU time, 126.3M memory peak, 0B memory swap peak.
Mar 7 01:15:54.903419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:55.017648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:55.030494 (kubelet)[3179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:15:55.072966 kubelet[3179]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:15:55.072966 kubelet[3179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:15:55.073508 kubelet[3179]: I0307 01:15:55.073009 3179 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:15:55.079094 kubelet[3179]: I0307 01:15:55.079057 3179 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 7 01:15:55.079094 kubelet[3179]: I0307 01:15:55.079086 3179 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:15:55.079292 kubelet[3179]: I0307 01:15:55.079137 3179 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:15:55.079292 kubelet[3179]: I0307 01:15:55.079152 3179 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:15:55.079422 kubelet[3179]: I0307 01:15:55.079402 3179 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:15:55.080678 kubelet[3179]: I0307 01:15:55.080651 3179 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 7 01:15:55.082972 kubelet[3179]: I0307 01:15:55.082815 3179 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:15:55.089130 kubelet[3179]: E0307 01:15:55.088124 3179 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:15:55.089130 kubelet[3179]: I0307 01:15:55.088174 3179 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:15:55.091765 kubelet[3179]: I0307 01:15:55.091736 3179 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:15:55.092004 kubelet[3179]: I0307 01:15:55.091969 3179 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:15:55.092179 kubelet[3179]: I0307 01:15:55.092002 3179 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-1070eafa86","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:15:55.092331 kubelet[3179]: I0307 01:15:55.092182 3179 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:15:55.092331 kubelet[3179]: I0307 01:15:55.092196 3179 container_manager_linux.go:306] "Creating device plugin manager"
Mar 7 01:15:55.092331 kubelet[3179]: I0307 01:15:55.092226 3179 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:15:55.092448 kubelet[3179]: I0307 01:15:55.092419 3179 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:15:55.092576 kubelet[3179]: I0307 01:15:55.092563 3179 kubelet.go:475] "Attempting to sync node with API server"
Mar 7 01:15:55.092640 kubelet[3179]: I0307 01:15:55.092585 3179 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:15:55.092640 kubelet[3179]: I0307 01:15:55.092610 3179 kubelet.go:387] "Adding apiserver pod source"
Mar 7 01:15:55.092640 kubelet[3179]: I0307 01:15:55.092622 3179 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:15:55.097084 kubelet[3179]: I0307 01:15:55.095041 3179 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:15:55.097084 kubelet[3179]: I0307 01:15:55.096466 3179 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:15:55.097084 kubelet[3179]: I0307 01:15:55.096507 3179 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:15:55.104352 kubelet[3179]: I0307 01:15:55.104312 3179 server.go:1262] "Started kubelet"
Mar 7 01:15:55.108735 kubelet[3179]: I0307 01:15:55.108715 3179 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:15:55.114527 kubelet[3179]: I0307 01:15:55.114491 3179 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:15:55.116483 kubelet[3179]: I0307 01:15:55.116461 3179 server.go:310] "Adding debug handlers to kubelet server"
Mar 7 01:15:55.121749 kubelet[3179]: I0307 01:15:55.121721 3179 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:15:55.122997 kubelet[3179]: I0307 01:15:55.122226 3179 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:15:55.122997 kubelet[3179]: I0307 01:15:55.122437 3179 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:15:55.122997 kubelet[3179]: I0307 01:15:55.121888 3179 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:15:55.122997 kubelet[3179]: I0307 01:15:55.122760 3179 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:15:55.124873 kubelet[3179]: I0307 01:15:55.121872 3179 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 01:15:55.125519 kubelet[3179]: I0307 01:15:55.125490 3179 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:15:55.128371 kubelet[3179]: E0307 01:15:55.128339 3179 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:15:55.129050 kubelet[3179]: I0307 01:15:55.129023 3179 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:15:55.131129 kubelet[3179]: I0307 01:15:55.129742 3179 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:15:55.131526 kubelet[3179]: I0307 01:15:55.131507 3179 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:15:55.131625 kubelet[3179]: I0307 01:15:55.131614 3179 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 01:15:55.131713 kubelet[3179]: I0307 01:15:55.131704 3179 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 01:15:55.131836 kubelet[3179]: E0307 01:15:55.131808 3179 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:15:55.138561 kubelet[3179]: I0307 01:15:55.138322 3179 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:15:55.138561 kubelet[3179]: I0307 01:15:55.138342 3179 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:15:55.179946 kubelet[3179]: I0307 01:15:55.179909 3179 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:15:55.179946 kubelet[3179]: I0307 01:15:55.179929 3179 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:15:55.179946 kubelet[3179]: I0307 01:15:55.179950 3179 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:15:55.180224 kubelet[3179]: I0307 01:15:55.180084 3179 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 01:15:55.180224 kubelet[3179]: I0307 01:15:55.180094 3179 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 01:15:55.180224 kubelet[3179]: I0307 01:15:55.180134 3179 policy_none.go:49] "None policy: Start"
Mar 7 01:15:55.180224 kubelet[3179]: I0307 01:15:55.180147 3179 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:15:55.180224 kubelet[3179]: I0307 01:15:55.180159 3179 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:15:55.180429 kubelet[3179]: I0307 01:15:55.180292 3179 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 7 01:15:55.180429 kubelet[3179]: I0307 01:15:55.180319 3179 policy_none.go:47] "Start"
Mar 7 01:15:55.185137 kubelet[3179]: E0307 01:15:55.185092 3179 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:15:55.185656 kubelet[3179]: I0307 01:15:55.185299 3179 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:15:55.185656 kubelet[3179]: I0307 01:15:55.185316 3179 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:15:55.185656 kubelet[3179]: I0307 01:15:55.185555 3179 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:15:55.188407 kubelet[3179]: E0307 01:15:55.187583 3179 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:15:55.233325 kubelet[3179]: I0307 01:15:55.233279 3179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.233594 kubelet[3179]: I0307 01:15:55.233289 3179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.233757 kubelet[3179]: I0307 01:15:55.233377 3179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.252176 kubelet[3179]: I0307 01:15:55.252141 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:55.252176 kubelet[3179]: I0307 01:15:55.252141 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:55.252424 kubelet[3179]: E0307 01:15:55.252234 3179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-1070eafa86\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.252424 kubelet[3179]: E0307 01:15:55.252338 3179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.252709 kubelet[3179]: I0307 01:15:55.252574 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:55.252709 kubelet[3179]: E0307 01:15:55.252654 3179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.295496 kubelet[3179]: I0307 01:15:55.295364 3179 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.306020 kubelet[3179]: I0307 01:15:55.305981 3179 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.306223 kubelet[3179]: I0307 01:15:55.306072 3179 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327429 kubelet[3179]: I0307 01:15:55.327390 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a2ff459ee11aab5503a9cd16fc163d8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-1070eafa86\" (UID: \"7a2ff459ee11aab5503a9cd16fc163d8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327429 kubelet[3179]: I0307 01:15:55.327429 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55a1045a995fac32db4ce39dd41e51e9-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" (UID: \"55a1045a995fac32db4ce39dd41e51e9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327429 kubelet[3179]: I0307 01:15:55.327451 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55a1045a995fac32db4ce39dd41e51e9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" (UID: \"55a1045a995fac32db4ce39dd41e51e9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327840 kubelet[3179]: I0307 01:15:55.327492 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327840 kubelet[3179]: I0307 01:15:55.327512 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327840 kubelet[3179]: I0307 01:15:55.327549 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327840 kubelet[3179]: I0307 01:15:55.327594 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327840 kubelet[3179]: I0307 01:15:55.327622 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29ade1129e920079b467063aa8400f4a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-1070eafa86\" (UID: \"29ade1129e920079b467063aa8400f4a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:55.327989 kubelet[3179]: I0307 01:15:55.327651 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55a1045a995fac32db4ce39dd41e51e9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" (UID: \"55a1045a995fac32db4ce39dd41e51e9\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:56.093999 kubelet[3179]: I0307 01:15:56.093746 3179 apiserver.go:52] "Watching apiserver"
Mar 7 01:15:56.123254 kubelet[3179]: I0307 01:15:56.123171 3179 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 7 01:15:56.165470 kubelet[3179]: I0307 01:15:56.165432 3179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:56.166658 kubelet[3179]: I0307 01:15:56.166214 3179 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:56.180976 kubelet[3179]: I0307 01:15:56.180784 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:56.180976 kubelet[3179]: E0307 01:15:56.180858 3179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-1070eafa86\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:56.183122 kubelet[3179]: I0307 01:15:56.181626 3179 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 7 01:15:56.183122 kubelet[3179]: E0307 01:15:56.181667 3179 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-1070eafa86\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86"
Mar 7 01:15:56.209445 kubelet[3179]: I0307 01:15:56.209216 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-1070eafa86" podStartSLOduration=4.209197102 podStartE2EDuration="4.209197102s" podCreationTimestamp="2026-03-07 01:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:56.195932624 +0000 UTC m=+1.161102657" watchObservedRunningTime="2026-03-07 01:15:56.209197102 +0000 UTC m=+1.174367235"
Mar 7 01:15:56.222974 kubelet[3179]: I0307 01:15:56.222751 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-1070eafa86" podStartSLOduration=4.222735083 podStartE2EDuration="4.222735083s" podCreationTimestamp="2026-03-07 01:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:56.209923111 +0000 UTC m=+1.175093144" watchObservedRunningTime="2026-03-07 01:15:56.222735083 +0000 UTC m=+1.187905216"
Mar 7 01:15:56.222974 kubelet[3179]: I0307 01:15:56.222846 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-1070eafa86" podStartSLOduration=4.222842585 podStartE2EDuration="4.222842585s" podCreationTimestamp="2026-03-07 01:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:56.22252678 +0000 UTC m=+1.187696813" watchObservedRunningTime="2026-03-07 01:15:56.222842585 +0000 UTC m=+1.188012618"
Mar 7 01:16:01.917382 kubelet[3179]: I0307 01:16:01.917344 3179 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:16:01.918215 kubelet[3179]: I0307 01:16:01.917976 3179 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:16:01.918275 containerd[1718]: time="2026-03-07T01:16:01.917743466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:16:02.921141 systemd[1]: Created slice kubepods-besteffort-poda93b5619_2e22_48a0_8335_2d3ea75fe241.slice - libcontainer container kubepods-besteffort-poda93b5619_2e22_48a0_8335_2d3ea75fe241.slice.
Mar 7 01:16:02.979706 kubelet[3179]: I0307 01:16:02.979572 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a93b5619-2e22-48a0-8335-2d3ea75fe241-lib-modules\") pod \"kube-proxy-d4sx5\" (UID: \"a93b5619-2e22-48a0-8335-2d3ea75fe241\") " pod="kube-system/kube-proxy-d4sx5" Mar 7 01:16:02.979706 kubelet[3179]: I0307 01:16:02.979608 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2756\" (UniqueName: \"kubernetes.io/projected/a93b5619-2e22-48a0-8335-2d3ea75fe241-kube-api-access-b2756\") pod \"kube-proxy-d4sx5\" (UID: \"a93b5619-2e22-48a0-8335-2d3ea75fe241\") " pod="kube-system/kube-proxy-d4sx5" Mar 7 01:16:02.979706 kubelet[3179]: I0307 01:16:02.979627 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a93b5619-2e22-48a0-8335-2d3ea75fe241-kube-proxy\") pod \"kube-proxy-d4sx5\" (UID: \"a93b5619-2e22-48a0-8335-2d3ea75fe241\") " pod="kube-system/kube-proxy-d4sx5" Mar 7 01:16:02.979706 kubelet[3179]: I0307 01:16:02.979651 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a93b5619-2e22-48a0-8335-2d3ea75fe241-xtables-lock\") pod \"kube-proxy-d4sx5\" (UID: \"a93b5619-2e22-48a0-8335-2d3ea75fe241\") " pod="kube-system/kube-proxy-d4sx5" Mar 7 01:16:03.145245 systemd[1]: Created slice kubepods-besteffort-pod6fc24bb0_27d8_46a5_9f49_30fdd3f3537e.slice - libcontainer container kubepods-besteffort-pod6fc24bb0_27d8_46a5_9f49_30fdd3f3537e.slice. 
Mar 7 01:16:03.181564 kubelet[3179]: I0307 01:16:03.181395 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6fc24bb0-27d8-46a5-9f49-30fdd3f3537e-var-lib-calico\") pod \"tigera-operator-5588576f44-p2xpp\" (UID: \"6fc24bb0-27d8-46a5-9f49-30fdd3f3537e\") " pod="tigera-operator/tigera-operator-5588576f44-p2xpp" Mar 7 01:16:03.181564 kubelet[3179]: I0307 01:16:03.181450 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dfm\" (UniqueName: \"kubernetes.io/projected/6fc24bb0-27d8-46a5-9f49-30fdd3f3537e-kube-api-access-s8dfm\") pod \"tigera-operator-5588576f44-p2xpp\" (UID: \"6fc24bb0-27d8-46a5-9f49-30fdd3f3537e\") " pod="tigera-operator/tigera-operator-5588576f44-p2xpp" Mar 7 01:16:03.235555 containerd[1718]: time="2026-03-07T01:16:03.235513601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4sx5,Uid:a93b5619-2e22-48a0-8335-2d3ea75fe241,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:03.279320 containerd[1718]: time="2026-03-07T01:16:03.278885080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:03.279320 containerd[1718]: time="2026-03-07T01:16:03.278958381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:03.279320 containerd[1718]: time="2026-03-07T01:16:03.278978582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:03.279320 containerd[1718]: time="2026-03-07T01:16:03.279052783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:03.315266 systemd[1]: Started cri-containerd-57dba37fc36c6297da3be5dc62012e8e54c221aa84d09d913fea17183cfeac0e.scope - libcontainer container 57dba37fc36c6297da3be5dc62012e8e54c221aa84d09d913fea17183cfeac0e. Mar 7 01:16:03.337706 containerd[1718]: time="2026-03-07T01:16:03.337546099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4sx5,Uid:a93b5619-2e22-48a0-8335-2d3ea75fe241,Namespace:kube-system,Attempt:0,} returns sandbox id \"57dba37fc36c6297da3be5dc62012e8e54c221aa84d09d913fea17183cfeac0e\"" Mar 7 01:16:03.347975 containerd[1718]: time="2026-03-07T01:16:03.347922461Z" level=info msg="CreateContainer within sandbox \"57dba37fc36c6297da3be5dc62012e8e54c221aa84d09d913fea17183cfeac0e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:16:03.382540 containerd[1718]: time="2026-03-07T01:16:03.382492803Z" level=info msg="CreateContainer within sandbox \"57dba37fc36c6297da3be5dc62012e8e54c221aa84d09d913fea17183cfeac0e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89a7939fd289d58b98fff9f8f4e7b1208cbcc1b26e94234146e71d16aa8109c3\"" Mar 7 01:16:03.383255 containerd[1718]: time="2026-03-07T01:16:03.383220314Z" level=info msg="StartContainer for \"89a7939fd289d58b98fff9f8f4e7b1208cbcc1b26e94234146e71d16aa8109c3\"" Mar 7 01:16:03.417275 systemd[1]: Started cri-containerd-89a7939fd289d58b98fff9f8f4e7b1208cbcc1b26e94234146e71d16aa8109c3.scope - libcontainer container 89a7939fd289d58b98fff9f8f4e7b1208cbcc1b26e94234146e71d16aa8109c3. 
Mar 7 01:16:03.453075 containerd[1718]: time="2026-03-07T01:16:03.451318480Z" level=info msg="StartContainer for \"89a7939fd289d58b98fff9f8f4e7b1208cbcc1b26e94234146e71d16aa8109c3\" returns successfully" Mar 7 01:16:03.456200 containerd[1718]: time="2026-03-07T01:16:03.455785750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-p2xpp,Uid:6fc24bb0-27d8-46a5-9f49-30fdd3f3537e,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:16:03.503249 containerd[1718]: time="2026-03-07T01:16:03.503062891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:03.504151 containerd[1718]: time="2026-03-07T01:16:03.503162692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:03.504151 containerd[1718]: time="2026-03-07T01:16:03.503185993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:03.504151 containerd[1718]: time="2026-03-07T01:16:03.503357495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:03.531324 systemd[1]: Started cri-containerd-24bebec699176653bdf4a6e960a34945d70347a6e0213c6b63afca1c88aac7c3.scope - libcontainer container 24bebec699176653bdf4a6e960a34945d70347a6e0213c6b63afca1c88aac7c3. 
Mar 7 01:16:03.576996 containerd[1718]: time="2026-03-07T01:16:03.576955348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-p2xpp,Uid:6fc24bb0-27d8-46a5-9f49-30fdd3f3537e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"24bebec699176653bdf4a6e960a34945d70347a6e0213c6b63afca1c88aac7c3\"" Mar 7 01:16:03.579782 containerd[1718]: time="2026-03-07T01:16:03.579310385Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:16:04.880417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819622835.mount: Deactivated successfully. Mar 7 01:16:05.146482 kubelet[3179]: I0307 01:16:05.146322 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d4sx5" podStartSLOduration=3.146302822 podStartE2EDuration="3.146302822s" podCreationTimestamp="2026-03-07 01:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:04.207251618 +0000 UTC m=+9.172421751" watchObservedRunningTime="2026-03-07 01:16:05.146302822 +0000 UTC m=+10.111472955" Mar 7 01:16:06.267311 containerd[1718]: time="2026-03-07T01:16:06.267255675Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:06.270049 containerd[1718]: time="2026-03-07T01:16:06.269983818Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:16:06.273165 containerd[1718]: time="2026-03-07T01:16:06.273116267Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:06.277806 containerd[1718]: time="2026-03-07T01:16:06.277755440Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:06.279159 containerd[1718]: time="2026-03-07T01:16:06.278458451Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.699110665s" Mar 7 01:16:06.279159 containerd[1718]: time="2026-03-07T01:16:06.278493551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:16:06.285355 containerd[1718]: time="2026-03-07T01:16:06.285321058Z" level=info msg="CreateContainer within sandbox \"24bebec699176653bdf4a6e960a34945d70347a6e0213c6b63afca1c88aac7c3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:16:06.314858 containerd[1718]: time="2026-03-07T01:16:06.314803620Z" level=info msg="CreateContainer within sandbox \"24bebec699176653bdf4a6e960a34945d70347a6e0213c6b63afca1c88aac7c3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"84f4c9c275c5dd06728d6c785fe267cc777edc6f09d2ee4a08610c012125869e\"" Mar 7 01:16:06.315613 containerd[1718]: time="2026-03-07T01:16:06.315465930Z" level=info msg="StartContainer for \"84f4c9c275c5dd06728d6c785fe267cc777edc6f09d2ee4a08610c012125869e\"" Mar 7 01:16:06.352275 systemd[1]: Started cri-containerd-84f4c9c275c5dd06728d6c785fe267cc777edc6f09d2ee4a08610c012125869e.scope - libcontainer container 84f4c9c275c5dd06728d6c785fe267cc777edc6f09d2ee4a08610c012125869e. 
Mar 7 01:16:06.381598 containerd[1718]: time="2026-03-07T01:16:06.381552965Z" level=info msg="StartContainer for \"84f4c9c275c5dd06728d6c785fe267cc777edc6f09d2ee4a08610c012125869e\" returns successfully" Mar 7 01:16:07.203112 kubelet[3179]: I0307 01:16:07.202567 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-p2xpp" podStartSLOduration=1.501622827 podStartE2EDuration="4.202549621s" podCreationTimestamp="2026-03-07 01:16:03 +0000 UTC" firstStartedPulling="2026-03-07 01:16:03.578332469 +0000 UTC m=+8.543502602" lastFinishedPulling="2026-03-07 01:16:06.279259263 +0000 UTC m=+11.244429396" observedRunningTime="2026-03-07 01:16:07.20185431 +0000 UTC m=+12.167024443" watchObservedRunningTime="2026-03-07 01:16:07.202549621 +0000 UTC m=+12.167719754" Mar 7 01:16:12.678336 sudo[2238]: pam_unix(sudo:session): session closed for user root Mar 7 01:16:12.779208 sshd[2235]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:12.785017 systemd[1]: sshd@6-10.200.8.30:22-10.200.16.10:55706.service: Deactivated successfully. Mar 7 01:16:12.788657 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:16:12.789007 systemd[1]: session-9.scope: Consumed 5.061s CPU time, 157.6M memory peak, 0B memory swap peak. Mar 7 01:16:12.791777 systemd-logind[1702]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:16:12.793715 systemd-logind[1702]: Removed session 9. Mar 7 01:16:16.115573 systemd[1]: Created slice kubepods-besteffort-pod55b2f64d_951b_4da9_9000_e9a7c0481649.slice - libcontainer container kubepods-besteffort-pod55b2f64d_951b_4da9_9000_e9a7c0481649.slice. 
Mar 7 01:16:16.163884 kubelet[3179]: I0307 01:16:16.163773 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwrqq\" (UniqueName: \"kubernetes.io/projected/55b2f64d-951b-4da9-9000-e9a7c0481649-kube-api-access-qwrqq\") pod \"calico-typha-5c949ff996-wrscl\" (UID: \"55b2f64d-951b-4da9-9000-e9a7c0481649\") " pod="calico-system/calico-typha-5c949ff996-wrscl" Mar 7 01:16:16.163884 kubelet[3179]: I0307 01:16:16.163814 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b2f64d-951b-4da9-9000-e9a7c0481649-tigera-ca-bundle\") pod \"calico-typha-5c949ff996-wrscl\" (UID: \"55b2f64d-951b-4da9-9000-e9a7c0481649\") " pod="calico-system/calico-typha-5c949ff996-wrscl" Mar 7 01:16:16.163884 kubelet[3179]: I0307 01:16:16.163872 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/55b2f64d-951b-4da9-9000-e9a7c0481649-typha-certs\") pod \"calico-typha-5c949ff996-wrscl\" (UID: \"55b2f64d-951b-4da9-9000-e9a7c0481649\") " pod="calico-system/calico-typha-5c949ff996-wrscl" Mar 7 01:16:16.219014 systemd[1]: Created slice kubepods-besteffort-podfc9e8a85_0855_4298_836c_cd9bfc65661a.slice - libcontainer container kubepods-besteffort-podfc9e8a85_0855_4298_836c_cd9bfc65661a.slice. 
Mar 7 01:16:16.265196 kubelet[3179]: I0307 01:16:16.264766 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-var-lib-calico\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266263 kubelet[3179]: I0307 01:16:16.265434 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-bpffs\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266263 kubelet[3179]: I0307 01:16:16.265473 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-sys-fs\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266263 kubelet[3179]: I0307 01:16:16.265510 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-xtables-lock\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266263 kubelet[3179]: I0307 01:16:16.265550 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-flexvol-driver-host\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266263 kubelet[3179]: I0307 01:16:16.265575 3179 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc9e8a85-0855-4298-836c-cd9bfc65661a-node-certs\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266542 kubelet[3179]: I0307 01:16:16.265593 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc9e8a85-0855-4298-836c-cd9bfc65661a-tigera-ca-bundle\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266542 kubelet[3179]: I0307 01:16:16.265615 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frjd9\" (UniqueName: \"kubernetes.io/projected/fc9e8a85-0855-4298-836c-cd9bfc65661a-kube-api-access-frjd9\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266542 kubelet[3179]: I0307 01:16:16.265634 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-cni-bin-dir\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266542 kubelet[3179]: I0307 01:16:16.265651 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-cni-log-dir\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266542 kubelet[3179]: I0307 01:16:16.265680 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-cni-net-dir\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266830 kubelet[3179]: I0307 01:16:16.265701 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-nodeproc\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266830 kubelet[3179]: I0307 01:16:16.265718 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-policysync\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266830 kubelet[3179]: I0307 01:16:16.265757 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-lib-modules\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.266830 kubelet[3179]: I0307 01:16:16.265777 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc9e8a85-0855-4298-836c-cd9bfc65661a-var-run-calico\") pod \"calico-node-7zgkp\" (UID: \"fc9e8a85-0855-4298-836c-cd9bfc65661a\") " pod="calico-system/calico-node-7zgkp" Mar 7 01:16:16.343872 kubelet[3179]: E0307 01:16:16.343823 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:16.367002 kubelet[3179]: I0307 01:16:16.366297 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/28e408df-d765-4f81-83a6-862639ee589c-registration-dir\") pod \"csi-node-driver-gkvkn\" (UID: \"28e408df-d765-4f81-83a6-862639ee589c\") " pod="calico-system/csi-node-driver-gkvkn" Mar 7 01:16:16.367002 kubelet[3179]: I0307 01:16:16.366343 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/28e408df-d765-4f81-83a6-862639ee589c-varrun\") pod \"csi-node-driver-gkvkn\" (UID: \"28e408df-d765-4f81-83a6-862639ee589c\") " pod="calico-system/csi-node-driver-gkvkn" Mar 7 01:16:16.367002 kubelet[3179]: I0307 01:16:16.366452 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28e408df-d765-4f81-83a6-862639ee589c-kubelet-dir\") pod \"csi-node-driver-gkvkn\" (UID: \"28e408df-d765-4f81-83a6-862639ee589c\") " pod="calico-system/csi-node-driver-gkvkn" Mar 7 01:16:16.367002 kubelet[3179]: I0307 01:16:16.366474 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/28e408df-d765-4f81-83a6-862639ee589c-socket-dir\") pod \"csi-node-driver-gkvkn\" (UID: \"28e408df-d765-4f81-83a6-862639ee589c\") " pod="calico-system/csi-node-driver-gkvkn" Mar 7 01:16:16.367002 kubelet[3179]: I0307 01:16:16.366536 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7xz\" (UniqueName: \"kubernetes.io/projected/28e408df-d765-4f81-83a6-862639ee589c-kube-api-access-sx7xz\") pod \"csi-node-driver-gkvkn\" (UID: 
\"28e408df-d765-4f81-83a6-862639ee589c\") " pod="calico-system/csi-node-driver-gkvkn" Mar 7 01:16:16.380225 kubelet[3179]: E0307 01:16:16.380135 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.380225 kubelet[3179]: W0307 01:16:16.380160 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.380225 kubelet[3179]: E0307 01:16:16.380188 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.406132 kubelet[3179]: E0307 01:16:16.404333 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.406132 kubelet[3179]: W0307 01:16:16.404475 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.406132 kubelet[3179]: E0307 01:16:16.404651 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.433003 containerd[1718]: time="2026-03-07T01:16:16.432901311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c949ff996-wrscl,Uid:55b2f64d-951b-4da9-9000-e9a7c0481649,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:16.468153 kubelet[3179]: E0307 01:16:16.468062 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.468290 kubelet[3179]: W0307 01:16:16.468159 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.468290 kubelet[3179]: E0307 01:16:16.468201 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.468718 kubelet[3179]: E0307 01:16:16.468523 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.468718 kubelet[3179]: W0307 01:16:16.468539 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.468718 kubelet[3179]: E0307 01:16:16.468555 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.470505 kubelet[3179]: E0307 01:16:16.469074 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.470505 kubelet[3179]: W0307 01:16:16.469090 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.470505 kubelet[3179]: E0307 01:16:16.469127 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.470505 kubelet[3179]: E0307 01:16:16.469635 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.470505 kubelet[3179]: W0307 01:16:16.469656 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.470505 kubelet[3179]: E0307 01:16:16.469669 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.470505 kubelet[3179]: E0307 01:16:16.469910 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.470505 kubelet[3179]: W0307 01:16:16.469920 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.470505 kubelet[3179]: E0307 01:16:16.469933 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.470505 kubelet[3179]: E0307 01:16:16.470180 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.470996 kubelet[3179]: W0307 01:16:16.470193 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.470996 kubelet[3179]: E0307 01:16:16.470206 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.470996 kubelet[3179]: E0307 01:16:16.470477 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.470996 kubelet[3179]: W0307 01:16:16.470488 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.470996 kubelet[3179]: E0307 01:16:16.470500 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.470996 kubelet[3179]: E0307 01:16:16.470787 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.470996 kubelet[3179]: W0307 01:16:16.470797 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.470996 kubelet[3179]: E0307 01:16:16.470810 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.471776 kubelet[3179]: E0307 01:16:16.471754 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.471776 kubelet[3179]: W0307 01:16:16.471773 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.471908 kubelet[3179]: E0307 01:16:16.471787 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.472752 kubelet[3179]: E0307 01:16:16.472077 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.472752 kubelet[3179]: W0307 01:16:16.472090 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.472752 kubelet[3179]: E0307 01:16:16.472149 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.473258 kubelet[3179]: E0307 01:16:16.473208 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.473258 kubelet[3179]: W0307 01:16:16.473254 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.473435 kubelet[3179]: E0307 01:16:16.473270 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.474249 kubelet[3179]: E0307 01:16:16.473577 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.474249 kubelet[3179]: W0307 01:16:16.473591 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.474249 kubelet[3179]: E0307 01:16:16.473604 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.474249 kubelet[3179]: E0307 01:16:16.473871 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.474249 kubelet[3179]: W0307 01:16:16.473881 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.474249 kubelet[3179]: E0307 01:16:16.473917 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.474941 kubelet[3179]: E0307 01:16:16.474769 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.474941 kubelet[3179]: W0307 01:16:16.474786 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.474941 kubelet[3179]: E0307 01:16:16.474804 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.475505 kubelet[3179]: E0307 01:16:16.475485 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.475505 kubelet[3179]: W0307 01:16:16.475504 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.475773 kubelet[3179]: E0307 01:16:16.475518 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.475773 kubelet[3179]: E0307 01:16:16.475765 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.475865 kubelet[3179]: W0307 01:16:16.475776 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.475865 kubelet[3179]: E0307 01:16:16.475789 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.476737 kubelet[3179]: E0307 01:16:16.476423 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.476737 kubelet[3179]: W0307 01:16:16.476459 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.476737 kubelet[3179]: E0307 01:16:16.476475 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.476938 kubelet[3179]: E0307 01:16:16.476905 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.476938 kubelet[3179]: W0307 01:16:16.476924 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.476938 kubelet[3179]: E0307 01:16:16.476937 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.477831 kubelet[3179]: E0307 01:16:16.477806 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.477831 kubelet[3179]: W0307 01:16:16.477828 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.477971 kubelet[3179]: E0307 01:16:16.477842 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.478453 kubelet[3179]: E0307 01:16:16.478360 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.478453 kubelet[3179]: W0307 01:16:16.478376 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.478453 kubelet[3179]: E0307 01:16:16.478392 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.479621 kubelet[3179]: E0307 01:16:16.478660 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.479621 kubelet[3179]: W0307 01:16:16.478674 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.479621 kubelet[3179]: E0307 01:16:16.478688 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.479621 kubelet[3179]: E0307 01:16:16.479146 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.479621 kubelet[3179]: W0307 01:16:16.479180 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.479621 kubelet[3179]: E0307 01:16:16.479195 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.479621 kubelet[3179]: E0307 01:16:16.479459 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.479621 kubelet[3179]: W0307 01:16:16.479471 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.479621 kubelet[3179]: E0307 01:16:16.479484 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.480017 kubelet[3179]: E0307 01:16:16.479804 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.480017 kubelet[3179]: W0307 01:16:16.479816 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.480017 kubelet[3179]: E0307 01:16:16.479858 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:16.482149 kubelet[3179]: E0307 01:16:16.480391 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.482149 kubelet[3179]: W0307 01:16:16.480405 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.482149 kubelet[3179]: E0307 01:16:16.480435 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.485167 containerd[1718]: time="2026-03-07T01:16:16.485076907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:16.486629 containerd[1718]: time="2026-03-07T01:16:16.486584633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:16.486763 containerd[1718]: time="2026-03-07T01:16:16.486737135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:16.486943 containerd[1718]: time="2026-03-07T01:16:16.486915639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:16.491193 kubelet[3179]: E0307 01:16:16.491171 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:16.491193 kubelet[3179]: W0307 01:16:16.491189 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:16.491343 kubelet[3179]: E0307 01:16:16.491207 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:16.511298 systemd[1]: Started cri-containerd-44fe292e673bf1af57e63c9d0df258019958e4929e8ab2f5c770695574a8fd75.scope - libcontainer container 44fe292e673bf1af57e63c9d0df258019958e4929e8ab2f5c770695574a8fd75. Mar 7 01:16:16.529806 containerd[1718]: time="2026-03-07T01:16:16.528747557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7zgkp,Uid:fc9e8a85-0855-4298-836c-cd9bfc65661a,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:16.555551 containerd[1718]: time="2026-03-07T01:16:16.555508016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c949ff996-wrscl,Uid:55b2f64d-951b-4da9-9000-e9a7c0481649,Namespace:calico-system,Attempt:0,} returns sandbox id \"44fe292e673bf1af57e63c9d0df258019958e4929e8ab2f5c770695574a8fd75\"" Mar 7 01:16:16.559016 containerd[1718]: time="2026-03-07T01:16:16.558809473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:16:16.580055 containerd[1718]: time="2026-03-07T01:16:16.579706332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:16.580055 containerd[1718]: time="2026-03-07T01:16:16.579760933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:16.580055 containerd[1718]: time="2026-03-07T01:16:16.579795033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:16.580055 containerd[1718]: time="2026-03-07T01:16:16.579884535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:16.600339 systemd[1]: Started cri-containerd-6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475.scope - libcontainer container 6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475. Mar 7 01:16:16.624197 containerd[1718]: time="2026-03-07T01:16:16.623306381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7zgkp,Uid:fc9e8a85-0855-4298-836c-cd9bfc65661a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\"" Mar 7 01:16:17.914203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881686692.mount: Deactivated successfully. 
Mar 7 01:16:18.132964 kubelet[3179]: E0307 01:16:18.132924 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:19.047217 containerd[1718]: time="2026-03-07T01:16:19.047161102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:19.050362 containerd[1718]: time="2026-03-07T01:16:19.050272956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:16:19.054553 containerd[1718]: time="2026-03-07T01:16:19.053546812Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:19.058422 containerd[1718]: time="2026-03-07T01:16:19.058389095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:19.059236 containerd[1718]: time="2026-03-07T01:16:19.059200809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.499490721s" Mar 7 01:16:19.059331 containerd[1718]: time="2026-03-07T01:16:19.059239810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:16:19.060538 containerd[1718]: time="2026-03-07T01:16:19.060512432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:16:19.086865 containerd[1718]: time="2026-03-07T01:16:19.086820583Z" level=info msg="CreateContainer within sandbox \"44fe292e673bf1af57e63c9d0df258019958e4929e8ab2f5c770695574a8fd75\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:16:19.120404 containerd[1718]: time="2026-03-07T01:16:19.120359659Z" level=info msg="CreateContainer within sandbox \"44fe292e673bf1af57e63c9d0df258019958e4929e8ab2f5c770695574a8fd75\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"36c5e9f8faf8e68e3c083dac720a584c4863f57b0ed23de92314b9e1e593bbd8\"" Mar 7 01:16:19.121236 containerd[1718]: time="2026-03-07T01:16:19.121035271Z" level=info msg="StartContainer for \"36c5e9f8faf8e68e3c083dac720a584c4863f57b0ed23de92314b9e1e593bbd8\"" Mar 7 01:16:19.159312 systemd[1]: Started cri-containerd-36c5e9f8faf8e68e3c083dac720a584c4863f57b0ed23de92314b9e1e593bbd8.scope - libcontainer container 36c5e9f8faf8e68e3c083dac720a584c4863f57b0ed23de92314b9e1e593bbd8. 
Mar 7 01:16:19.209639 containerd[1718]: time="2026-03-07T01:16:19.208303370Z" level=info msg="StartContainer for \"36c5e9f8faf8e68e3c083dac720a584c4863f57b0ed23de92314b9e1e593bbd8\" returns successfully" Mar 7 01:16:19.269361 kubelet[3179]: E0307 01:16:19.268839 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.269361 kubelet[3179]: W0307 01:16:19.268875 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.269361 kubelet[3179]: E0307 01:16:19.268900 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.270310 kubelet[3179]: E0307 01:16:19.270136 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.270310 kubelet[3179]: W0307 01:16:19.270155 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.270310 kubelet[3179]: E0307 01:16:19.270175 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.270805 kubelet[3179]: E0307 01:16:19.270466 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.270805 kubelet[3179]: W0307 01:16:19.270478 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.270805 kubelet[3179]: E0307 01:16:19.270492 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.271961 kubelet[3179]: E0307 01:16:19.271795 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.271961 kubelet[3179]: W0307 01:16:19.271825 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.271961 kubelet[3179]: E0307 01:16:19.271840 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.272595 kubelet[3179]: E0307 01:16:19.272457 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.272595 kubelet[3179]: W0307 01:16:19.272472 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.272595 kubelet[3179]: E0307 01:16:19.272485 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.273207 kubelet[3179]: E0307 01:16:19.273075 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.273207 kubelet[3179]: W0307 01:16:19.273087 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.273479 kubelet[3179]: E0307 01:16:19.273346 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.273733 kubelet[3179]: E0307 01:16:19.273712 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.273938 kubelet[3179]: W0307 01:16:19.273820 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.273938 kubelet[3179]: E0307 01:16:19.273840 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.274281 kubelet[3179]: E0307 01:16:19.274267 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.274475 kubelet[3179]: W0307 01:16:19.274350 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.274475 kubelet[3179]: E0307 01:16:19.274368 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.275119 kubelet[3179]: E0307 01:16:19.274996 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.275119 kubelet[3179]: W0307 01:16:19.275011 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.275119 kubelet[3179]: E0307 01:16:19.275025 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.275920 kubelet[3179]: E0307 01:16:19.275783 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.275920 kubelet[3179]: W0307 01:16:19.275800 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.275920 kubelet[3179]: E0307 01:16:19.275815 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.276277 kubelet[3179]: E0307 01:16:19.276190 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.276277 kubelet[3179]: W0307 01:16:19.276201 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.276277 kubelet[3179]: E0307 01:16:19.276214 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.276942 kubelet[3179]: E0307 01:16:19.276926 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.277388 kubelet[3179]: W0307 01:16:19.277008 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.277388 kubelet[3179]: E0307 01:16:19.277028 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.278055 kubelet[3179]: E0307 01:16:19.278035 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.278376 kubelet[3179]: W0307 01:16:19.278075 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.278376 kubelet[3179]: E0307 01:16:19.278091 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.278470 kubelet[3179]: E0307 01:16:19.278430 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.278470 kubelet[3179]: W0307 01:16:19.278442 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.278470 kubelet[3179]: E0307 01:16:19.278455 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.278705 kubelet[3179]: E0307 01:16:19.278687 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.278705 kubelet[3179]: W0307 01:16:19.278703 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.278800 kubelet[3179]: E0307 01:16:19.278717 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.293292 kubelet[3179]: E0307 01:16:19.293264 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.293697 kubelet[3179]: W0307 01:16:19.293533 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.293697 kubelet[3179]: E0307 01:16:19.293565 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.295846 kubelet[3179]: E0307 01:16:19.295608 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.295846 kubelet[3179]: W0307 01:16:19.295624 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.295846 kubelet[3179]: E0307 01:16:19.295639 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.296386 kubelet[3179]: E0307 01:16:19.296091 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.296386 kubelet[3179]: W0307 01:16:19.296284 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.296386 kubelet[3179]: E0307 01:16:19.296318 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.297046 kubelet[3179]: E0307 01:16:19.296942 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.297046 kubelet[3179]: W0307 01:16:19.296956 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.297046 kubelet[3179]: E0307 01:16:19.296970 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.299316 kubelet[3179]: E0307 01:16:19.299222 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.299479 kubelet[3179]: W0307 01:16:19.299382 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.299479 kubelet[3179]: E0307 01:16:19.299409 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.301658 kubelet[3179]: E0307 01:16:19.301483 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.301658 kubelet[3179]: W0307 01:16:19.301500 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.301658 kubelet[3179]: E0307 01:16:19.301611 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.302355 kubelet[3179]: E0307 01:16:19.302138 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.302355 kubelet[3179]: W0307 01:16:19.302154 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.302355 kubelet[3179]: E0307 01:16:19.302170 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.302518 kubelet[3179]: E0307 01:16:19.302462 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.302518 kubelet[3179]: W0307 01:16:19.302474 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.302518 kubelet[3179]: E0307 01:16:19.302489 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.303530 kubelet[3179]: E0307 01:16:19.302691 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.303530 kubelet[3179]: W0307 01:16:19.302704 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.303530 kubelet[3179]: E0307 01:16:19.302716 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.303530 kubelet[3179]: E0307 01:16:19.302978 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.303530 kubelet[3179]: W0307 01:16:19.302990 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.303530 kubelet[3179]: E0307 01:16:19.303002 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.303530 kubelet[3179]: E0307 01:16:19.303360 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.303530 kubelet[3179]: W0307 01:16:19.303372 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.303530 kubelet[3179]: E0307 01:16:19.303387 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.307073 kubelet[3179]: E0307 01:16:19.306667 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.307073 kubelet[3179]: W0307 01:16:19.306842 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.307073 kubelet[3179]: E0307 01:16:19.306861 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.307730 kubelet[3179]: E0307 01:16:19.307313 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.307730 kubelet[3179]: W0307 01:16:19.307327 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.307730 kubelet[3179]: E0307 01:16:19.307657 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.309781 kubelet[3179]: E0307 01:16:19.309469 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.309781 kubelet[3179]: W0307 01:16:19.309500 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.309781 kubelet[3179]: E0307 01:16:19.309515 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.311277 kubelet[3179]: E0307 01:16:19.310743 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.311277 kubelet[3179]: W0307 01:16:19.310773 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.311277 kubelet[3179]: E0307 01:16:19.310787 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.312384 kubelet[3179]: E0307 01:16:19.312177 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.312384 kubelet[3179]: W0307 01:16:19.312193 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.312384 kubelet[3179]: E0307 01:16:19.312223 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:19.312556 kubelet[3179]: E0307 01:16:19.312484 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.312556 kubelet[3179]: W0307 01:16:19.312497 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.312556 kubelet[3179]: E0307 01:16:19.312511 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:19.312989 kubelet[3179]: E0307 01:16:19.312957 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:19.312989 kubelet[3179]: W0307 01:16:19.312977 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:19.312989 kubelet[3179]: E0307 01:16:19.312991 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.132460 kubelet[3179]: E0307 01:16:20.132400 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:20.231300 kubelet[3179]: I0307 01:16:20.230898 3179 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:16:20.287021 kubelet[3179]: E0307 01:16:20.286960 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.287021 kubelet[3179]: W0307 01:16:20.287013 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.287879 kubelet[3179]: E0307 01:16:20.287041 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.287879 kubelet[3179]: E0307 01:16:20.287304 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.287879 kubelet[3179]: W0307 01:16:20.287319 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.287879 kubelet[3179]: E0307 01:16:20.287338 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.287879 kubelet[3179]: E0307 01:16:20.287871 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.288121 kubelet[3179]: W0307 01:16:20.287887 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.288121 kubelet[3179]: E0307 01:16:20.287906 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.288992 kubelet[3179]: E0307 01:16:20.288613 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.288992 kubelet[3179]: W0307 01:16:20.288642 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.288992 kubelet[3179]: E0307 01:16:20.288659 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.289250 kubelet[3179]: E0307 01:16:20.289214 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.289250 kubelet[3179]: W0307 01:16:20.289228 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.289250 kubelet[3179]: E0307 01:16:20.289243 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.289681 kubelet[3179]: E0307 01:16:20.289572 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.289681 kubelet[3179]: W0307 01:16:20.289585 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.289681 kubelet[3179]: E0307 01:16:20.289630 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.290232 kubelet[3179]: E0307 01:16:20.290134 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.290232 kubelet[3179]: W0307 01:16:20.290148 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.290232 kubelet[3179]: E0307 01:16:20.290161 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.290809 kubelet[3179]: E0307 01:16:20.290594 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.290809 kubelet[3179]: W0307 01:16:20.290607 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.290809 kubelet[3179]: E0307 01:16:20.290621 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.291263 kubelet[3179]: E0307 01:16:20.291033 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.291263 kubelet[3179]: W0307 01:16:20.291047 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.291263 kubelet[3179]: E0307 01:16:20.291068 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.291713 kubelet[3179]: E0307 01:16:20.291483 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.291713 kubelet[3179]: W0307 01:16:20.291495 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.291713 kubelet[3179]: E0307 01:16:20.291516 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.292142 kubelet[3179]: E0307 01:16:20.291941 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.292142 kubelet[3179]: W0307 01:16:20.291952 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.292142 kubelet[3179]: E0307 01:16:20.291965 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.292603 kubelet[3179]: E0307 01:16:20.292453 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.292603 kubelet[3179]: W0307 01:16:20.292466 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.292603 kubelet[3179]: E0307 01:16:20.292490 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.293329 kubelet[3179]: E0307 01:16:20.293002 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.293329 kubelet[3179]: W0307 01:16:20.293017 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.293329 kubelet[3179]: E0307 01:16:20.293030 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.293329 kubelet[3179]: E0307 01:16:20.293255 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.293329 kubelet[3179]: W0307 01:16:20.293265 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.293329 kubelet[3179]: E0307 01:16:20.293282 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.293962 kubelet[3179]: E0307 01:16:20.293852 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.293962 kubelet[3179]: W0307 01:16:20.293874 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.293962 kubelet[3179]: E0307 01:16:20.293889 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.311963 kubelet[3179]: E0307 01:16:20.311314 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.311963 kubelet[3179]: W0307 01:16:20.311337 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.311963 kubelet[3179]: E0307 01:16:20.311360 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.311963 kubelet[3179]: E0307 01:16:20.311666 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.311963 kubelet[3179]: W0307 01:16:20.311678 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.311963 kubelet[3179]: E0307 01:16:20.311692 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.312482 kubelet[3179]: E0307 01:16:20.312165 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.312482 kubelet[3179]: W0307 01:16:20.312179 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.312482 kubelet[3179]: E0307 01:16:20.312195 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.313204 kubelet[3179]: E0307 01:16:20.312917 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.313204 kubelet[3179]: W0307 01:16:20.312932 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.313204 kubelet[3179]: E0307 01:16:20.312946 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.313565 kubelet[3179]: E0307 01:16:20.313407 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.313565 kubelet[3179]: W0307 01:16:20.313421 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.313565 kubelet[3179]: E0307 01:16:20.313434 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.314012 kubelet[3179]: E0307 01:16:20.313889 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.314012 kubelet[3179]: W0307 01:16:20.313903 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.314012 kubelet[3179]: E0307 01:16:20.313916 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.314500 kubelet[3179]: E0307 01:16:20.314371 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.314500 kubelet[3179]: W0307 01:16:20.314386 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.314500 kubelet[3179]: E0307 01:16:20.314398 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.314953 kubelet[3179]: E0307 01:16:20.314807 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.314953 kubelet[3179]: W0307 01:16:20.314821 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.314953 kubelet[3179]: E0307 01:16:20.314834 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.315639 kubelet[3179]: E0307 01:16:20.315307 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.315639 kubelet[3179]: W0307 01:16:20.315322 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.315639 kubelet[3179]: E0307 01:16:20.315337 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.315973 kubelet[3179]: E0307 01:16:20.315959 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.316169 kubelet[3179]: W0307 01:16:20.316042 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.316169 kubelet[3179]: E0307 01:16:20.316062 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.316536 kubelet[3179]: E0307 01:16:20.316433 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.316536 kubelet[3179]: W0307 01:16:20.316447 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.316536 kubelet[3179]: E0307 01:16:20.316460 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.317045 kubelet[3179]: E0307 01:16:20.316888 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.317045 kubelet[3179]: W0307 01:16:20.316903 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.317045 kubelet[3179]: E0307 01:16:20.316917 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.317567 kubelet[3179]: E0307 01:16:20.317358 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.317567 kubelet[3179]: W0307 01:16:20.317377 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.317567 kubelet[3179]: E0307 01:16:20.317392 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.318125 kubelet[3179]: E0307 01:16:20.317828 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.318125 kubelet[3179]: W0307 01:16:20.317842 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.318125 kubelet[3179]: E0307 01:16:20.317856 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.318587 kubelet[3179]: E0307 01:16:20.318456 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.318587 kubelet[3179]: W0307 01:16:20.318469 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.318587 kubelet[3179]: E0307 01:16:20.318482 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.319796 kubelet[3179]: E0307 01:16:20.319514 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.319796 kubelet[3179]: W0307 01:16:20.319530 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.319796 kubelet[3179]: E0307 01:16:20.319544 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.320158 kubelet[3179]: E0307 01:16:20.320144 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.320346 kubelet[3179]: W0307 01:16:20.320243 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.320346 kubelet[3179]: E0307 01:16:20.320265 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:16:20.320687 kubelet[3179]: E0307 01:16:20.320636 3179 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:16:20.320687 kubelet[3179]: W0307 01:16:20.320651 3179 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:16:20.320687 kubelet[3179]: E0307 01:16:20.320665 3179 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:16:20.386916 containerd[1718]: time="2026-03-07T01:16:20.386750406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:20.390876 containerd[1718]: time="2026-03-07T01:16:20.390815375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:16:20.396134 containerd[1718]: time="2026-03-07T01:16:20.395431355Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:20.403673 containerd[1718]: time="2026-03-07T01:16:20.403633396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:20.405941 containerd[1718]: time="2026-03-07T01:16:20.405902234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.3452383s" Mar 7 01:16:20.406037 containerd[1718]: time="2026-03-07T01:16:20.405944735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:16:20.413818 containerd[1718]: time="2026-03-07T01:16:20.413788770Z" level=info msg="CreateContainer within sandbox \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:16:20.462051 containerd[1718]: time="2026-03-07T01:16:20.462003298Z" level=info msg="CreateContainer within sandbox \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d\"" Mar 7 01:16:20.464126 containerd[1718]: time="2026-03-07T01:16:20.462695210Z" level=info msg="StartContainer for \"68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d\"" Mar 7 01:16:20.497093 systemd[1]: run-containerd-runc-k8s.io-68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d-runc.geYO7p.mount: Deactivated successfully. Mar 7 01:16:20.504455 systemd[1]: Started cri-containerd-68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d.scope - libcontainer container 68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d. Mar 7 01:16:20.538577 containerd[1718]: time="2026-03-07T01:16:20.538493311Z" level=info msg="StartContainer for \"68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d\" returns successfully" Mar 7 01:16:20.545685 systemd[1]: cri-containerd-68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d.scope: Deactivated successfully. Mar 7 01:16:20.574602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d-rootfs.mount: Deactivated successfully. 
Mar 7 01:16:21.744633 kubelet[3179]: I0307 01:16:21.256146 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c949ff996-wrscl" podStartSLOduration=2.753020352 podStartE2EDuration="5.256125134s" podCreationTimestamp="2026-03-07 01:16:16 +0000 UTC" firstStartedPulling="2026-03-07 01:16:16.557269647 +0000 UTC m=+21.522439680" lastFinishedPulling="2026-03-07 01:16:19.060374429 +0000 UTC m=+24.025544462" observedRunningTime="2026-03-07 01:16:19.245706712 +0000 UTC m=+24.210876745" watchObservedRunningTime="2026-03-07 01:16:21.256125134 +0000 UTC m=+26.221295267" Mar 7 01:16:22.133836 kubelet[3179]: E0307 01:16:22.133480 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:22.547068 containerd[1718]: time="2026-03-07T01:16:22.546986801Z" level=info msg="shim disconnected" id=68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d namespace=k8s.io Mar 7 01:16:22.547068 containerd[1718]: time="2026-03-07T01:16:22.547059002Z" level=warning msg="cleaning up after shim disconnected" id=68d6aaf4d08824d526a608cc63e250727c24f83e2b4af4718da8457aa42a4e6d namespace=k8s.io Mar 7 01:16:22.547068 containerd[1718]: time="2026-03-07T01:16:22.547070002Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:23.241919 containerd[1718]: time="2026-03-07T01:16:23.241448526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:16:24.132322 kubelet[3179]: E0307 01:16:24.132263 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:26.132732 kubelet[3179]: E0307 01:16:26.132682 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:28.133020 kubelet[3179]: E0307 01:16:28.132795 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:29.007204 kubelet[3179]: I0307 01:16:29.007158 3179 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:16:30.133125 kubelet[3179]: E0307 01:16:30.132738 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:31.757796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434183482.mount: Deactivated successfully. 
Mar 7 01:16:31.795587 containerd[1718]: time="2026-03-07T01:16:31.795532033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:31.799025 containerd[1718]: time="2026-03-07T01:16:31.798868886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:16:31.802449 containerd[1718]: time="2026-03-07T01:16:31.802391341Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:31.806887 containerd[1718]: time="2026-03-07T01:16:31.806837911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:31.807590 containerd[1718]: time="2026-03-07T01:16:31.807471920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.565978794s" Mar 7 01:16:31.807590 containerd[1718]: time="2026-03-07T01:16:31.807509721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:16:31.816965 containerd[1718]: time="2026-03-07T01:16:31.816897768Z" level=info msg="CreateContainer within sandbox \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:16:31.899863 containerd[1718]: time="2026-03-07T01:16:31.899808870Z" level=info msg="CreateContainer 
within sandbox \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9\"" Mar 7 01:16:31.901825 containerd[1718]: time="2026-03-07T01:16:31.900571582Z" level=info msg="StartContainer for \"e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9\"" Mar 7 01:16:31.946944 systemd[1]: run-containerd-runc-k8s.io-e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9-runc.Lgn0wm.mount: Deactivated successfully. Mar 7 01:16:31.963330 systemd[1]: Started cri-containerd-e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9.scope - libcontainer container e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9. Mar 7 01:16:32.012949 containerd[1718]: time="2026-03-07T01:16:32.012831373Z" level=info msg="StartContainer for \"e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9\" returns successfully" Mar 7 01:16:32.053489 systemd[1]: cri-containerd-e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9.scope: Deactivated successfully. Mar 7 01:16:32.132334 kubelet[3179]: E0307 01:16:32.132263 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:32.757373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9-rootfs.mount: Deactivated successfully. 
Mar 7 01:16:34.133068 kubelet[3179]: E0307 01:16:34.132993 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:35.506020 containerd[1718]: time="2026-03-07T01:16:35.505941388Z" level=info msg="shim disconnected" id=e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9 namespace=k8s.io Mar 7 01:16:35.506020 containerd[1718]: time="2026-03-07T01:16:35.506013189Z" level=warning msg="cleaning up after shim disconnected" id=e631d00e8356e9549e3cc8fc1f79f26a24ab5b26e6d38fea47cb299ca9ad4cf9 namespace=k8s.io Mar 7 01:16:35.506020 containerd[1718]: time="2026-03-07T01:16:35.506024689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:36.132736 kubelet[3179]: E0307 01:16:36.132679 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:36.271129 containerd[1718]: time="2026-03-07T01:16:36.271073592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 01:16:38.132716 kubelet[3179]: E0307 01:16:38.132650 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:40.132991 kubelet[3179]: E0307 01:16:40.132941 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:40.425427 containerd[1718]: time="2026-03-07T01:16:40.425298829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:40.428126 containerd[1718]: time="2026-03-07T01:16:40.428056074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 01:16:40.432850 containerd[1718]: time="2026-03-07T01:16:40.432787250Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:40.437218 containerd[1718]: time="2026-03-07T01:16:40.437165120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:40.438162 containerd[1718]: time="2026-03-07T01:16:40.437904432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.16677874s" Mar 7 01:16:40.438162 containerd[1718]: time="2026-03-07T01:16:40.437944233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 01:16:40.445357 containerd[1718]: time="2026-03-07T01:16:40.445326252Z" level=info msg="CreateContainer within sandbox 
\"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:16:40.485194 containerd[1718]: time="2026-03-07T01:16:40.485144493Z" level=info msg="CreateContainer within sandbox \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62\"" Mar 7 01:16:40.487899 containerd[1718]: time="2026-03-07T01:16:40.486138809Z" level=info msg="StartContainer for \"de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62\"" Mar 7 01:16:40.518768 systemd[1]: run-containerd-runc-k8s.io-de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62-runc.N1ZLRY.mount: Deactivated successfully. Mar 7 01:16:40.528239 systemd[1]: Started cri-containerd-de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62.scope - libcontainer container de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62. 
Mar 7 01:16:40.561956 containerd[1718]: time="2026-03-07T01:16:40.561908329Z" level=info msg="StartContainer for \"de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62\" returns successfully" Mar 7 01:16:42.132968 kubelet[3179]: E0307 01:16:42.132906 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:42.367064 containerd[1718]: time="2026-03-07T01:16:42.367015188Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:16:42.370620 systemd[1]: cri-containerd-de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62.scope: Deactivated successfully. Mar 7 01:16:42.393439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62-rootfs.mount: Deactivated successfully. Mar 7 01:16:42.412719 kubelet[3179]: I0307 01:16:42.412686 3179 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 7 01:16:43.709990 systemd[1]: Created slice kubepods-burstable-poda2c57fb9_f299_436c_8599_34cdb3343b9a.slice - libcontainer container kubepods-burstable-poda2c57fb9_f299_436c_8599_34cdb3343b9a.slice. 
Mar 7 01:16:43.712126 containerd[1718]: time="2026-03-07T01:16:43.711729835Z" level=info msg="shim disconnected" id=de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62 namespace=k8s.io Mar 7 01:16:43.712126 containerd[1718]: time="2026-03-07T01:16:43.711805636Z" level=warning msg="cleaning up after shim disconnected" id=de37abf9d49b7a84432458c6197e4763393a94cb6491e93dec71795ef8db4b62 namespace=k8s.io Mar 7 01:16:43.712126 containerd[1718]: time="2026-03-07T01:16:43.711819337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:43.727045 systemd[1]: Created slice kubepods-besteffort-pod7e3699ed_bb54_40b3_937a_980b33b5dfb7.slice - libcontainer container kubepods-besteffort-pod7e3699ed_bb54_40b3_937a_980b33b5dfb7.slice. Mar 7 01:16:43.745226 systemd[1]: Created slice kubepods-burstable-pod99a410ce_35f7_4fd4_9422_35a8b99f1549.slice - libcontainer container kubepods-burstable-pod99a410ce_35f7_4fd4_9422_35a8b99f1549.slice. Mar 7 01:16:43.758515 systemd[1]: Created slice kubepods-besteffort-pod28e408df_d765_4f81_83a6_862639ee589c.slice - libcontainer container kubepods-besteffort-pod28e408df_d765_4f81_83a6_862639ee589c.slice. Mar 7 01:16:43.773565 systemd[1]: Created slice kubepods-besteffort-pode6dff6a9_2fcc_4331_a1f3_940a7614c300.slice - libcontainer container kubepods-besteffort-pode6dff6a9_2fcc_4331_a1f3_940a7614c300.slice. 
Mar 7 01:16:43.774728 containerd[1718]: time="2026-03-07T01:16:43.774287342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gkvkn,Uid:28e408df-d765-4f81-83a6-862639ee589c,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:43.776550 kubelet[3179]: I0307 01:16:43.776465 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cplj2\" (UniqueName: \"kubernetes.io/projected/07474d87-ed17-46c3-afc1-e14a04645e92-kube-api-access-cplj2\") pod \"whisker-6c4858b6b8-t58mf\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " pod="calico-system/whisker-6c4858b6b8-t58mf" Mar 7 01:16:43.776550 kubelet[3179]: I0307 01:16:43.776508 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc9g7\" (UniqueName: \"kubernetes.io/projected/0fb06cd2-739c-4484-8717-b361c9a762dd-kube-api-access-sc9g7\") pod \"calico-kube-controllers-79dffb94bc-tv6jk\" (UID: \"0fb06cd2-739c-4484-8717-b361c9a762dd\") " pod="calico-system/calico-kube-controllers-79dffb94bc-tv6jk" Mar 7 01:16:43.776550 kubelet[3179]: I0307 01:16:43.776535 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e3699ed-bb54-40b3-937a-980b33b5dfb7-calico-apiserver-certs\") pod \"calico-apiserver-5c79b8d867-vk5ds\" (UID: \"7e3699ed-bb54-40b3-937a-980b33b5dfb7\") " pod="calico-system/calico-apiserver-5c79b8d867-vk5ds" Mar 7 01:16:43.777906 kubelet[3179]: I0307 01:16:43.776562 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml22b\" (UniqueName: \"kubernetes.io/projected/7e3699ed-bb54-40b3-937a-980b33b5dfb7-kube-api-access-ml22b\") pod \"calico-apiserver-5c79b8d867-vk5ds\" (UID: \"7e3699ed-bb54-40b3-937a-980b33b5dfb7\") " pod="calico-system/calico-apiserver-5c79b8d867-vk5ds" Mar 7 01:16:43.777906 kubelet[3179]: 
I0307 01:16:43.776581 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49b855bf-45ba-449a-a595-ffe6b81c323a-calico-apiserver-certs\") pod \"calico-apiserver-5c79b8d867-qdz59\" (UID: \"49b855bf-45ba-449a-a595-ffe6b81c323a\") " pod="calico-system/calico-apiserver-5c79b8d867-qdz59" Mar 7 01:16:43.777906 kubelet[3179]: I0307 01:16:43.776623 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4p8\" (UniqueName: \"kubernetes.io/projected/49b855bf-45ba-449a-a595-ffe6b81c323a-kube-api-access-wj4p8\") pod \"calico-apiserver-5c79b8d867-qdz59\" (UID: \"49b855bf-45ba-449a-a595-ffe6b81c323a\") " pod="calico-system/calico-apiserver-5c79b8d867-qdz59" Mar 7 01:16:43.777906 kubelet[3179]: I0307 01:16:43.776647 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c57fb9-f299-436c-8599-34cdb3343b9a-config-volume\") pod \"coredns-66bc5c9577-8rgcl\" (UID: \"a2c57fb9-f299-436c-8599-34cdb3343b9a\") " pod="kube-system/coredns-66bc5c9577-8rgcl" Mar 7 01:16:43.777906 kubelet[3179]: I0307 01:16:43.776674 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-backend-key-pair\") pod \"whisker-6c4858b6b8-t58mf\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " pod="calico-system/whisker-6c4858b6b8-t58mf" Mar 7 01:16:43.778167 kubelet[3179]: I0307 01:16:43.776701 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-nginx-config\") pod \"whisker-6c4858b6b8-t58mf\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " 
pod="calico-system/whisker-6c4858b6b8-t58mf" Mar 7 01:16:43.778167 kubelet[3179]: I0307 01:16:43.776723 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm2k\" (UniqueName: \"kubernetes.io/projected/a2c57fb9-f299-436c-8599-34cdb3343b9a-kube-api-access-9lm2k\") pod \"coredns-66bc5c9577-8rgcl\" (UID: \"a2c57fb9-f299-436c-8599-34cdb3343b9a\") " pod="kube-system/coredns-66bc5c9577-8rgcl" Mar 7 01:16:43.778167 kubelet[3179]: I0307 01:16:43.776754 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99a410ce-35f7-4fd4-9422-35a8b99f1549-config-volume\") pod \"coredns-66bc5c9577-fv9qs\" (UID: \"99a410ce-35f7-4fd4-9422-35a8b99f1549\") " pod="kube-system/coredns-66bc5c9577-fv9qs" Mar 7 01:16:43.778167 kubelet[3179]: I0307 01:16:43.776778 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg629\" (UniqueName: \"kubernetes.io/projected/99a410ce-35f7-4fd4-9422-35a8b99f1549-kube-api-access-tg629\") pod \"coredns-66bc5c9577-fv9qs\" (UID: \"99a410ce-35f7-4fd4-9422-35a8b99f1549\") " pod="kube-system/coredns-66bc5c9577-fv9qs" Mar 7 01:16:43.778167 kubelet[3179]: I0307 01:16:43.776802 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-ca-bundle\") pod \"whisker-6c4858b6b8-t58mf\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " pod="calico-system/whisker-6c4858b6b8-t58mf" Mar 7 01:16:43.779350 kubelet[3179]: I0307 01:16:43.776823 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fb06cd2-739c-4484-8717-b361c9a762dd-tigera-ca-bundle\") pod \"calico-kube-controllers-79dffb94bc-tv6jk\" 
(UID: \"0fb06cd2-739c-4484-8717-b361c9a762dd\") " pod="calico-system/calico-kube-controllers-79dffb94bc-tv6jk" Mar 7 01:16:43.787501 systemd[1]: Created slice kubepods-besteffort-pod0fb06cd2_739c_4484_8717_b361c9a762dd.slice - libcontainer container kubepods-besteffort-pod0fb06cd2_739c_4484_8717_b361c9a762dd.slice. Mar 7 01:16:43.795232 systemd[1]: Created slice kubepods-besteffort-pod49b855bf_45ba_449a_a595_ffe6b81c323a.slice - libcontainer container kubepods-besteffort-pod49b855bf_45ba_449a_a595_ffe6b81c323a.slice. Mar 7 01:16:43.801466 systemd[1]: Created slice kubepods-besteffort-pod07474d87_ed17_46c3_afc1_e14a04645e92.slice - libcontainer container kubepods-besteffort-pod07474d87_ed17_46c3_afc1_e14a04645e92.slice. Mar 7 01:16:43.878972 kubelet[3179]: I0307 01:16:43.878130 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e6dff6a9-2fcc-4331-a1f3-940a7614c300-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-4f4st\" (UID: \"e6dff6a9-2fcc-4331-a1f3-940a7614c300\") " pod="calico-system/goldmane-cccfbd5cf-4f4st" Mar 7 01:16:43.878972 kubelet[3179]: I0307 01:16:43.878246 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6dff6a9-2fcc-4331-a1f3-940a7614c300-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-4f4st\" (UID: \"e6dff6a9-2fcc-4331-a1f3-940a7614c300\") " pod="calico-system/goldmane-cccfbd5cf-4f4st" Mar 7 01:16:43.881340 kubelet[3179]: I0307 01:16:43.881297 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6dff6a9-2fcc-4331-a1f3-940a7614c300-config\") pod \"goldmane-cccfbd5cf-4f4st\" (UID: \"e6dff6a9-2fcc-4331-a1f3-940a7614c300\") " pod="calico-system/goldmane-cccfbd5cf-4f4st" Mar 7 01:16:43.881510 kubelet[3179]: I0307 01:16:43.881488 3179 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mf8r\" (UniqueName: \"kubernetes.io/projected/e6dff6a9-2fcc-4331-a1f3-940a7614c300-kube-api-access-4mf8r\") pod \"goldmane-cccfbd5cf-4f4st\" (UID: \"e6dff6a9-2fcc-4331-a1f3-940a7614c300\") " pod="calico-system/goldmane-cccfbd5cf-4f4st" Mar 7 01:16:43.977174 containerd[1718]: time="2026-03-07T01:16:43.977128108Z" level=error msg="Failed to destroy network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:43.977543 containerd[1718]: time="2026-03-07T01:16:43.977503214Z" level=error msg="encountered an error cleaning up failed sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:43.977653 containerd[1718]: time="2026-03-07T01:16:43.977576115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gkvkn,Uid:28e408df-d765-4f81-83a6-862639ee589c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:43.977915 kubelet[3179]: E0307 01:16:43.977877 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:43.978018 kubelet[3179]: E0307 01:16:43.977962 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gkvkn" Mar 7 01:16:43.978018 kubelet[3179]: E0307 01:16:43.977991 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gkvkn" Mar 7 01:16:43.978124 kubelet[3179]: E0307 01:16:43.978066 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gkvkn_calico-system(28e408df-d765-4f81-83a6-862639ee589c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gkvkn_calico-system(28e408df-d765-4f81-83a6-862639ee589c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:44.025307 containerd[1718]: 
time="2026-03-07T01:16:44.025209082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8rgcl,Uid:a2c57fb9-f299-436c-8599-34cdb3343b9a,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:44.042123 containerd[1718]: time="2026-03-07T01:16:44.042058653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-vk5ds,Uid:7e3699ed-bb54-40b3-937a-980b33b5dfb7,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:44.057766 containerd[1718]: time="2026-03-07T01:16:44.057721705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fv9qs,Uid:99a410ce-35f7-4fd4-9422-35a8b99f1549,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:44.090371 containerd[1718]: time="2026-03-07T01:16:44.090323330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-4f4st,Uid:e6dff6a9-2fcc-4331-a1f3-940a7614c300,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:44.143132 containerd[1718]: time="2026-03-07T01:16:44.142843475Z" level=error msg="Failed to destroy network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.143592 containerd[1718]: time="2026-03-07T01:16:44.143412384Z" level=error msg="encountered an error cleaning up failed sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.143592 containerd[1718]: time="2026-03-07T01:16:44.143483686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8rgcl,Uid:a2c57fb9-f299-436c-8599-34cdb3343b9a,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.144121 kubelet[3179]: E0307 01:16:44.143899 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.144121 kubelet[3179]: E0307 01:16:44.143989 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8rgcl" Mar 7 01:16:44.144121 kubelet[3179]: E0307 01:16:44.144016 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8rgcl" Mar 7 01:16:44.144887 kubelet[3179]: E0307 01:16:44.144818 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8rgcl_kube-system(a2c57fb9-f299-436c-8599-34cdb3343b9a)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-66bc5c9577-8rgcl_kube-system(a2c57fb9-f299-436c-8599-34cdb3343b9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8rgcl" podUID="a2c57fb9-f299-436c-8599-34cdb3343b9a" Mar 7 01:16:44.163838 containerd[1718]: time="2026-03-07T01:16:44.163773212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4858b6b8-t58mf,Uid:07474d87-ed17-46c3-afc1-e14a04645e92,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:44.169751 containerd[1718]: time="2026-03-07T01:16:44.168853594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-qdz59,Uid:49b855bf-45ba-449a-a595-ffe6b81c323a,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:44.171169 containerd[1718]: time="2026-03-07T01:16:44.171132731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dffb94bc-tv6jk,Uid:0fb06cd2-739c-4484-8717-b361c9a762dd,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:44.211270 containerd[1718]: time="2026-03-07T01:16:44.211212576Z" level=error msg="Failed to destroy network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.211688 containerd[1718]: time="2026-03-07T01:16:44.211597282Z" level=error msg="encountered an error cleaning up failed sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.211688 containerd[1718]: time="2026-03-07T01:16:44.211670483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-vk5ds,Uid:7e3699ed-bb54-40b3-937a-980b33b5dfb7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.211985 kubelet[3179]: E0307 01:16:44.211932 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.212729 kubelet[3179]: E0307 01:16:44.212003 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c79b8d867-vk5ds" Mar 7 01:16:44.212729 kubelet[3179]: E0307 01:16:44.212032 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c79b8d867-vk5ds" Mar 7 01:16:44.212729 kubelet[3179]: E0307 01:16:44.212147 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c79b8d867-vk5ds_calico-system(7e3699ed-bb54-40b3-937a-980b33b5dfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c79b8d867-vk5ds_calico-system(7e3699ed-bb54-40b3-937a-980b33b5dfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5c79b8d867-vk5ds" podUID="7e3699ed-bb54-40b3-937a-980b33b5dfb7" Mar 7 01:16:44.296746 kubelet[3179]: I0307 01:16:44.296094 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:16:44.299912 containerd[1718]: time="2026-03-07T01:16:44.299870203Z" level=info msg="StopPodSandbox for \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\"" Mar 7 01:16:44.301074 containerd[1718]: time="2026-03-07T01:16:44.301035022Z" level=info msg="Ensure that sandbox 2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0 in task-service has been cleanup successfully" Mar 7 01:16:44.307024 containerd[1718]: time="2026-03-07T01:16:44.306990518Z" level=error msg="Failed to destroy network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.307650 containerd[1718]: 
time="2026-03-07T01:16:44.307584327Z" level=error msg="encountered an error cleaning up failed sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.307844 containerd[1718]: time="2026-03-07T01:16:44.307811931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fv9qs,Uid:99a410ce-35f7-4fd4-9422-35a8b99f1549,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.309164 kubelet[3179]: E0307 01:16:44.309124 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.309388 kubelet[3179]: E0307 01:16:44.309186 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fv9qs" Mar 7 01:16:44.309388 kubelet[3179]: E0307 01:16:44.309212 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fv9qs" Mar 7 01:16:44.309388 kubelet[3179]: E0307 01:16:44.309273 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fv9qs_kube-system(99a410ce-35f7-4fd4-9422-35a8b99f1549)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fv9qs_kube-system(99a410ce-35f7-4fd4-9422-35a8b99f1549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fv9qs" podUID="99a410ce-35f7-4fd4-9422-35a8b99f1549" Mar 7 01:16:44.313606 kubelet[3179]: I0307 01:16:44.313318 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Mar 7 01:16:44.315336 containerd[1718]: time="2026-03-07T01:16:44.315306452Z" level=info msg="StopPodSandbox for \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\"" Mar 7 01:16:44.316046 containerd[1718]: time="2026-03-07T01:16:44.315920361Z" level=info msg="Ensure that sandbox 4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae in task-service has been cleanup successfully" Mar 7 01:16:44.321480 kubelet[3179]: I0307 01:16:44.321450 3179 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:16:44.333128 containerd[1718]: time="2026-03-07T01:16:44.329242076Z" level=info msg="StopPodSandbox for \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\"" Mar 7 01:16:44.333128 containerd[1718]: time="2026-03-07T01:16:44.329450579Z" level=info msg="Ensure that sandbox 21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b in task-service has been cleanup successfully" Mar 7 01:16:44.339286 containerd[1718]: time="2026-03-07T01:16:44.339226637Z" level=error msg="Failed to destroy network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.343888 containerd[1718]: time="2026-03-07T01:16:44.343842811Z" level=error msg="encountered an error cleaning up failed sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.345196 containerd[1718]: time="2026-03-07T01:16:44.345027530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-4f4st,Uid:e6dff6a9-2fcc-4331-a1f3-940a7614c300,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.347445 kubelet[3179]: E0307 01:16:44.346696 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.347445 kubelet[3179]: E0307 01:16:44.347231 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-4f4st" Mar 7 01:16:44.347445 kubelet[3179]: E0307 01:16:44.347259 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-4f4st" Mar 7 01:16:44.347642 kubelet[3179]: E0307 01:16:44.347326 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-4f4st_calico-system(e6dff6a9-2fcc-4331-a1f3-940a7614c300)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-4f4st_calico-system(e6dff6a9-2fcc-4331-a1f3-940a7614c300)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-4f4st" podUID="e6dff6a9-2fcc-4331-a1f3-940a7614c300" Mar 7 01:16:44.355154 containerd[1718]: time="2026-03-07T01:16:44.354729486Z" level=info msg="CreateContainer within sandbox \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:16:44.403220 containerd[1718]: time="2026-03-07T01:16:44.403023064Z" level=error msg="StopPodSandbox for \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\" failed" error="failed to destroy network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.403916 kubelet[3179]: E0307 01:16:44.403406 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:16:44.403916 kubelet[3179]: E0307 01:16:44.403551 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0"} Mar 7 01:16:44.403916 kubelet[3179]: E0307 01:16:44.403626 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28e408df-d765-4f81-83a6-862639ee589c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:44.403916 kubelet[3179]: E0307 01:16:44.403661 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28e408df-d765-4f81-83a6-862639ee589c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gkvkn" podUID="28e408df-d765-4f81-83a6-862639ee589c" Mar 7 01:16:44.441678 containerd[1718]: time="2026-03-07T01:16:44.440296964Z" level=info msg="CreateContainer within sandbox \"6d7eb0120da77cc9ec5d03da1fffaed501a98ecb29231a0a2d7f0f399d002475\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"92fe672a76044d4d414821e5ba01d4e35498713b867fd60c0bbd37b259397d51\"" Mar 7 01:16:44.443005 containerd[1718]: time="2026-03-07T01:16:44.442850105Z" level=info msg="StartContainer for \"92fe672a76044d4d414821e5ba01d4e35498713b867fd60c0bbd37b259397d51\"" Mar 7 01:16:44.460288 containerd[1718]: time="2026-03-07T01:16:44.460232485Z" level=error msg="Failed to destroy network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.462669 containerd[1718]: time="2026-03-07T01:16:44.462633323Z" level=error msg="encountered an error cleaning up failed sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.465153 containerd[1718]: time="2026-03-07T01:16:44.464151048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4858b6b8-t58mf,Uid:07474d87-ed17-46c3-afc1-e14a04645e92,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.465935 kubelet[3179]: E0307 01:16:44.465415 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.465935 kubelet[3179]: E0307 01:16:44.465477 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c4858b6b8-t58mf" Mar 7 01:16:44.465935 kubelet[3179]: E0307 01:16:44.465503 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c4858b6b8-t58mf" Mar 7 01:16:44.466174 kubelet[3179]: E0307 01:16:44.465588 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c4858b6b8-t58mf_calico-system(07474d87-ed17-46c3-afc1-e14a04645e92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c4858b6b8-t58mf_calico-system(07474d87-ed17-46c3-afc1-e14a04645e92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c4858b6b8-t58mf" podUID="07474d87-ed17-46c3-afc1-e14a04645e92" Mar 7 01:16:44.494699 containerd[1718]: time="2026-03-07T01:16:44.494635838Z" level=error msg="StopPodSandbox for \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\" failed" error="failed to destroy network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.494989 kubelet[3179]: E0307 01:16:44.494899 3179 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Mar 7 01:16:44.494989 kubelet[3179]: 
E0307 01:16:44.494960 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"} Mar 7 01:16:44.495225 kubelet[3179]: E0307 01:16:44.495003 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2c57fb9-f299-436c-8599-34cdb3343b9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:44.495225 kubelet[3179]: E0307 01:16:44.495038 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2c57fb9-f299-436c-8599-34cdb3343b9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8rgcl" podUID="a2c57fb9-f299-436c-8599-34cdb3343b9a" Mar 7 01:16:44.513046 containerd[1718]: time="2026-03-07T01:16:44.512686929Z" level=error msg="StopPodSandbox for \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\" failed" error="failed to destroy network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.513722 kubelet[3179]: E0307 01:16:44.513534 3179 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:16:44.513722 kubelet[3179]: E0307 01:16:44.513592 3179 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b"} Mar 7 01:16:44.513722 kubelet[3179]: E0307 01:16:44.513642 3179 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e3699ed-bb54-40b3-937a-980b33b5dfb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:16:44.513722 kubelet[3179]: E0307 01:16:44.513678 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e3699ed-bb54-40b3-937a-980b33b5dfb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5c79b8d867-vk5ds" podUID="7e3699ed-bb54-40b3-937a-980b33b5dfb7" Mar 7 01:16:44.523322 systemd[1]: Started cri-containerd-92fe672a76044d4d414821e5ba01d4e35498713b867fd60c0bbd37b259397d51.scope - libcontainer container 
92fe672a76044d4d414821e5ba01d4e35498713b867fd60c0bbd37b259397d51. Mar 7 01:16:44.540299 containerd[1718]: time="2026-03-07T01:16:44.540234673Z" level=error msg="Failed to destroy network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.541181 containerd[1718]: time="2026-03-07T01:16:44.541130887Z" level=error msg="encountered an error cleaning up failed sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.541347 containerd[1718]: time="2026-03-07T01:16:44.541305190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dffb94bc-tv6jk,Uid:0fb06cd2-739c-4484-8717-b361c9a762dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.541822 kubelet[3179]: E0307 01:16:44.541773 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.542153 kubelet[3179]: E0307 01:16:44.541907 3179 kuberuntime_sandbox.go:71] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79dffb94bc-tv6jk" Mar 7 01:16:44.542248 kubelet[3179]: E0307 01:16:44.542165 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79dffb94bc-tv6jk" Mar 7 01:16:44.542294 kubelet[3179]: E0307 01:16:44.542243 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79dffb94bc-tv6jk_calico-system(0fb06cd2-739c-4484-8717-b361c9a762dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79dffb94bc-tv6jk_calico-system(0fb06cd2-739c-4484-8717-b361c9a762dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79dffb94bc-tv6jk" podUID="0fb06cd2-739c-4484-8717-b361c9a762dd" Mar 7 01:16:44.545795 containerd[1718]: time="2026-03-07T01:16:44.545755861Z" level=error msg="Failed to destroy network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.546336 containerd[1718]: time="2026-03-07T01:16:44.546149468Z" level=error msg="encountered an error cleaning up failed sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.546336 containerd[1718]: time="2026-03-07T01:16:44.546213769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-qdz59,Uid:49b855bf-45ba-449a-a595-ffe6b81c323a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.549142 kubelet[3179]: E0307 01:16:44.546624 3179 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:16:44.549142 kubelet[3179]: E0307 01:16:44.546678 3179 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c79b8d867-qdz59" Mar 7 01:16:44.549142 kubelet[3179]: E0307 01:16:44.546705 3179 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c79b8d867-qdz59" Mar 7 01:16:44.549343 kubelet[3179]: E0307 01:16:44.546770 3179 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c79b8d867-qdz59_calico-system(49b855bf-45ba-449a-a595-ffe6b81c323a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c79b8d867-qdz59_calico-system(49b855bf-45ba-449a-a595-ffe6b81c323a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5c79b8d867-qdz59" podUID="49b855bf-45ba-449a-a595-ffe6b81c323a" Mar 7 01:16:44.572607 containerd[1718]: time="2026-03-07T01:16:44.572560793Z" level=info msg="StartContainer for \"92fe672a76044d4d414821e5ba01d4e35498713b867fd60c0bbd37b259397d51\" returns successfully" Mar 7 01:16:44.901598 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0-shm.mount: Deactivated successfully. 
Mar 7 01:16:45.324014 kubelet[3179]: I0307 01:16:45.323979 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:45.326230 containerd[1718]: time="2026-03-07T01:16:45.325182509Z" level=info msg="StopPodSandbox for \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\"" Mar 7 01:16:45.326230 containerd[1718]: time="2026-03-07T01:16:45.325395312Z" level=info msg="Ensure that sandbox f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04 in task-service has been cleanup successfully" Mar 7 01:16:45.326658 kubelet[3179]: I0307 01:16:45.325619 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:45.329626 containerd[1718]: time="2026-03-07T01:16:45.328004754Z" level=info msg="StopPodSandbox for \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\"" Mar 7 01:16:45.329626 containerd[1718]: time="2026-03-07T01:16:45.329363476Z" level=info msg="Ensure that sandbox 64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807 in task-service has been cleanup successfully" Mar 7 01:16:45.334302 kubelet[3179]: I0307 01:16:45.334275 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:45.334880 containerd[1718]: time="2026-03-07T01:16:45.334811164Z" level=info msg="StopPodSandbox for \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\"" Mar 7 01:16:45.335514 containerd[1718]: time="2026-03-07T01:16:45.335484475Z" level=info msg="Ensure that sandbox da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca in task-service has been cleanup successfully" Mar 7 01:16:45.339479 kubelet[3179]: I0307 01:16:45.338805 3179 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:45.339969 containerd[1718]: time="2026-03-07T01:16:45.339943546Z" level=info msg="StopPodSandbox for \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\"" Mar 7 01:16:45.340322 containerd[1718]: time="2026-03-07T01:16:45.340264852Z" level=info msg="Ensure that sandbox 3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457 in task-service has been cleanup successfully" Mar 7 01:16:45.343389 kubelet[3179]: I0307 01:16:45.343356 3179 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:45.344248 containerd[1718]: time="2026-03-07T01:16:45.344225615Z" level=info msg="StopPodSandbox for \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\"" Mar 7 01:16:45.347677 containerd[1718]: time="2026-03-07T01:16:45.347654271Z" level=info msg="Ensure that sandbox 3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361 in task-service has been cleanup successfully" Mar 7 01:16:45.391197 kubelet[3179]: I0307 01:16:45.390645 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7zgkp" podStartSLOduration=5.57946187 podStartE2EDuration="29.390622462s" podCreationTimestamp="2026-03-07 01:16:16 +0000 UTC" firstStartedPulling="2026-03-07 01:16:16.628070862 +0000 UTC m=+21.593240895" lastFinishedPulling="2026-03-07 01:16:40.439231454 +0000 UTC m=+45.404401487" observedRunningTime="2026-03-07 01:16:45.385052673 +0000 UTC m=+50.350222806" watchObservedRunningTime="2026-03-07 01:16:45.390622462 +0000 UTC m=+50.355792595" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.506 [INFO][4373] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.506 
[INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" iface="eth0" netns="/var/run/netns/cni-71ebe702-e21c-9061-2962-2bfbe2d01128" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.507 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" iface="eth0" netns="/var/run/netns/cni-71ebe702-e21c-9061-2962-2bfbe2d01128" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.508 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" iface="eth0" netns="/var/run/netns/cni-71ebe702-e21c-9061-2962-2bfbe2d01128" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.508 [INFO][4373] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.509 [INFO][4373] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.664 [INFO][4442] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.665 [INFO][4442] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.665 [INFO][4442] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.690 [WARNING][4442] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.690 [INFO][4442] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.692 [INFO][4442] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:45.704007 containerd[1718]: 2026-03-07 01:16:45.699 [INFO][4373] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:45.705299 containerd[1718]: time="2026-03-07T01:16:45.704250311Z" level=info msg="TearDown network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\" successfully" Mar 7 01:16:45.705299 containerd[1718]: time="2026-03-07T01:16:45.704284012Z" level=info msg="StopPodSandbox for \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\" returns successfully" Mar 7 01:16:45.710058 systemd[1]: run-netns-cni\x2d71ebe702\x2de21c\x2d9061\x2d2962\x2d2bfbe2d01128.mount: Deactivated successfully. 
Mar 7 01:16:45.715846 containerd[1718]: time="2026-03-07T01:16:45.715285489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-qdz59,Uid:49b855bf-45ba-449a-a595-ffe6b81c323a,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.537 [INFO][4405] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.538 [INFO][4405] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" iface="eth0" netns="/var/run/netns/cni-feb13511-a8e0-f664-2118-1e3f88979e06" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.539 [INFO][4405] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" iface="eth0" netns="/var/run/netns/cni-feb13511-a8e0-f664-2118-1e3f88979e06" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.545 [INFO][4405] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" iface="eth0" netns="/var/run/netns/cni-feb13511-a8e0-f664-2118-1e3f88979e06" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.546 [INFO][4405] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.546 [INFO][4405] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.669 [INFO][4450] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.669 [INFO][4450] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.693 [INFO][4450] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.703 [WARNING][4450] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.703 [INFO][4450] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.707 [INFO][4450] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:45.718209 containerd[1718]: 2026-03-07 01:16:45.713 [INFO][4405] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:45.723206 systemd[1]: run-netns-cni\x2dfeb13511\x2da8e0\x2df664\x2d2118\x2d1e3f88979e06.mount: Deactivated successfully. 
Mar 7 01:16:45.724437 containerd[1718]: time="2026-03-07T01:16:45.724389535Z" level=info msg="TearDown network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\" successfully" Mar 7 01:16:45.724588 containerd[1718]: time="2026-03-07T01:16:45.724567338Z" level=info msg="StopPodSandbox for \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\" returns successfully" Mar 7 01:16:45.731791 containerd[1718]: time="2026-03-07T01:16:45.731763054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dffb94bc-tv6jk,Uid:0fb06cd2-739c-4484-8717-b361c9a762dd,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.542 [INFO][4369] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.542 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" iface="eth0" netns="/var/run/netns/cni-ad3566fb-e3cc-d56d-ff14-e20bb686730b" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.543 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" iface="eth0" netns="/var/run/netns/cni-ad3566fb-e3cc-d56d-ff14-e20bb686730b" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.547 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" iface="eth0" netns="/var/run/netns/cni-ad3566fb-e3cc-d56d-ff14-e20bb686730b" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.547 [INFO][4369] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.547 [INFO][4369] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.675 [INFO][4451] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.675 [INFO][4451] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.707 [INFO][4451] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.727 [WARNING][4451] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.727 [INFO][4451] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.729 [INFO][4451] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:45.736812 containerd[1718]: 2026-03-07 01:16:45.735 [INFO][4369] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:45.737885 containerd[1718]: time="2026-03-07T01:16:45.737765451Z" level=info msg="TearDown network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\" successfully" Mar 7 01:16:45.737885 containerd[1718]: time="2026-03-07T01:16:45.737793951Z" level=info msg="StopPodSandbox for \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\" returns successfully" Mar 7 01:16:45.742435 systemd[1]: run-netns-cni\x2dad3566fb\x2de3cc\x2dd56d\x2dff14\x2de20bb686730b.mount: Deactivated successfully. Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.607 [INFO][4404] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.608 [INFO][4404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" iface="eth0" netns="/var/run/netns/cni-652dfc50-704d-59b9-2669-9113baa07fa1" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.609 [INFO][4404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" iface="eth0" netns="/var/run/netns/cni-652dfc50-704d-59b9-2669-9113baa07fa1" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.611 [INFO][4404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" iface="eth0" netns="/var/run/netns/cni-652dfc50-704d-59b9-2669-9113baa07fa1" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.611 [INFO][4404] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.611 [INFO][4404] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.678 [INFO][4466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.682 [INFO][4466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.729 [INFO][4466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.749 [WARNING][4466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.749 [INFO][4466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.751 [INFO][4466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:45.758654 containerd[1718]: 2026-03-07 01:16:45.755 [INFO][4404] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:45.759826 containerd[1718]: time="2026-03-07T01:16:45.758814490Z" level=info msg="TearDown network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\" successfully" Mar 7 01:16:45.759826 containerd[1718]: time="2026-03-07T01:16:45.758840890Z" level=info msg="StopPodSandbox for \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\" returns successfully" Mar 7 01:16:45.768467 containerd[1718]: time="2026-03-07T01:16:45.767147124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fv9qs,Uid:99a410ce-35f7-4fd4-9422-35a8b99f1549,Namespace:kube-system,Attempt:1,}" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.575 [INFO][4403] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.575 [INFO][4403] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" iface="eth0" netns="/var/run/netns/cni-cf418ad4-68f7-ddf7-d691-9c0a1bfd6347" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.576 [INFO][4403] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" iface="eth0" netns="/var/run/netns/cni-cf418ad4-68f7-ddf7-d691-9c0a1bfd6347" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.576 [INFO][4403] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" iface="eth0" netns="/var/run/netns/cni-cf418ad4-68f7-ddf7-d691-9c0a1bfd6347" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.576 [INFO][4403] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.576 [INFO][4403] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.690 [INFO][4461] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.690 [INFO][4461] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.751 [INFO][4461] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.762 [WARNING][4461] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.762 [INFO][4461] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.763 [INFO][4461] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:45.768805 containerd[1718]: 2026-03-07 01:16:45.766 [INFO][4403] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:45.770694 containerd[1718]: time="2026-03-07T01:16:45.768913452Z" level=info msg="TearDown network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\" successfully" Mar 7 01:16:45.770694 containerd[1718]: time="2026-03-07T01:16:45.768936152Z" level=info msg="StopPodSandbox for \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\" returns successfully" Mar 7 01:16:45.789139 containerd[1718]: time="2026-03-07T01:16:45.788864073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-4f4st,Uid:e6dff6a9-2fcc-4331-a1f3-940a7614c300,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:45.888325 systemd[1]: run-netns-cni\x2dcf418ad4\x2d68f7\x2dddf7\x2dd691\x2d9c0a1bfd6347.mount: Deactivated successfully. Mar 7 01:16:45.888432 systemd[1]: run-netns-cni\x2d652dfc50\x2d704d\x2d59b9\x2d2669\x2d9113baa07fa1.mount: Deactivated successfully. 
Mar 7 01:16:45.901642 kubelet[3179]: I0307 01:16:45.900399 3179 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cplj2\" (UniqueName: \"kubernetes.io/projected/07474d87-ed17-46c3-afc1-e14a04645e92-kube-api-access-cplj2\") pod \"07474d87-ed17-46c3-afc1-e14a04645e92\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " Mar 7 01:16:45.901642 kubelet[3179]: I0307 01:16:45.900453 3179 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-ca-bundle\") pod \"07474d87-ed17-46c3-afc1-e14a04645e92\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " Mar 7 01:16:45.901642 kubelet[3179]: I0307 01:16:45.900482 3179 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-nginx-config\") pod \"07474d87-ed17-46c3-afc1-e14a04645e92\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " Mar 7 01:16:45.901642 kubelet[3179]: I0307 01:16:45.900511 3179 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-backend-key-pair\") pod \"07474d87-ed17-46c3-afc1-e14a04645e92\" (UID: \"07474d87-ed17-46c3-afc1-e14a04645e92\") " Mar 7 01:16:45.908856 kubelet[3179]: I0307 01:16:45.906685 3179 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "07474d87-ed17-46c3-afc1-e14a04645e92" (UID: "07474d87-ed17-46c3-afc1-e14a04645e92"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:16:45.909064 kubelet[3179]: I0307 01:16:45.908931 3179 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "07474d87-ed17-46c3-afc1-e14a04645e92" (UID: "07474d87-ed17-46c3-afc1-e14a04645e92"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:16:45.916781 systemd[1]: var-lib-kubelet-pods-07474d87\x2ded17\x2d46c3\x2dafc1\x2de14a04645e92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcplj2.mount: Deactivated successfully. Mar 7 01:16:45.918332 kubelet[3179]: I0307 01:16:45.918271 3179 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07474d87-ed17-46c3-afc1-e14a04645e92-kube-api-access-cplj2" (OuterVolumeSpecName: "kube-api-access-cplj2") pod "07474d87-ed17-46c3-afc1-e14a04645e92" (UID: "07474d87-ed17-46c3-afc1-e14a04645e92"). InnerVolumeSpecName "kube-api-access-cplj2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:16:45.920702 kubelet[3179]: I0307 01:16:45.920455 3179 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "07474d87-ed17-46c3-afc1-e14a04645e92" (UID: "07474d87-ed17-46c3-afc1-e14a04645e92"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:16:45.927191 systemd[1]: var-lib-kubelet-pods-07474d87\x2ded17\x2d46c3\x2dafc1\x2de14a04645e92-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 7 01:16:46.001543 kubelet[3179]: I0307 01:16:46.001331 3179 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cplj2\" (UniqueName: \"kubernetes.io/projected/07474d87-ed17-46c3-afc1-e14a04645e92-kube-api-access-cplj2\") on node \"ci-4081.3.6-n-1070eafa86\" DevicePath \"\"" Mar 7 01:16:46.001543 kubelet[3179]: I0307 01:16:46.001387 3179 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-ca-bundle\") on node \"ci-4081.3.6-n-1070eafa86\" DevicePath \"\"" Mar 7 01:16:46.001543 kubelet[3179]: I0307 01:16:46.001407 3179 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/07474d87-ed17-46c3-afc1-e14a04645e92-nginx-config\") on node \"ci-4081.3.6-n-1070eafa86\" DevicePath \"\"" Mar 7 01:16:46.001543 kubelet[3179]: I0307 01:16:46.001421 3179 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07474d87-ed17-46c3-afc1-e14a04645e92-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-1070eafa86\" DevicePath \"\"" Mar 7 01:16:46.207978 systemd-networkd[1367]: calic508c6e5693: Link UP Mar 7 01:16:46.208468 systemd-networkd[1367]: calic508c6e5693: Gained carrier Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:45.823 [ERROR][4481] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:45.843 [INFO][4481] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0 calico-apiserver-5c79b8d867- calico-system 49b855bf-45ba-449a-a595-ffe6b81c323a 907 0 2026-03-07 01:16:15 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c79b8d867 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-1070eafa86 calico-apiserver-5c79b8d867-qdz59 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic508c6e5693 [] [] }} ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:45.843 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.059 [INFO][4503] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" HandleID="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.072 [INFO][4503] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" HandleID="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-1070eafa86", 
"pod":"calico-apiserver-5c79b8d867-qdz59", "timestamp":"2026-03-07 01:16:46.059824235 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000226000)} Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.072 [INFO][4503] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.072 [INFO][4503] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.072 [INFO][4503] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.077 [INFO][4503] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.083 [INFO][4503] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.091 [INFO][4503] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.097 [INFO][4503] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.101 [INFO][4503] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.101 [INFO][4503] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 
handle="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.104 [INFO][4503] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0 Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.116 [INFO][4503] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.124 [INFO][4503] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.33.193/26] block=192.168.33.192/26 handle="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.124 [INFO][4503] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.193/26] handle="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.124 [INFO][4503] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:16:46.247173 containerd[1718]: 2026-03-07 01:16:46.124 [INFO][4503] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.193/26] IPv6=[] ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" HandleID="k8s-pod-network.6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:46.248874 containerd[1718]: 2026-03-07 01:16:46.133 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"49b855bf-45ba-449a-a595-ffe6b81c323a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"calico-apiserver-5c79b8d867-qdz59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic508c6e5693", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.248874 containerd[1718]: 2026-03-07 01:16:46.134 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.193/32] ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:46.248874 containerd[1718]: 2026-03-07 01:16:46.134 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic508c6e5693 ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:46.248874 containerd[1718]: 2026-03-07 01:16:46.209 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:46.248874 containerd[1718]: 2026-03-07 01:16:46.219 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"49b855bf-45ba-449a-a595-ffe6b81c323a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0", Pod:"calico-apiserver-5c79b8d867-qdz59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic508c6e5693", MAC:"3a:be:79:60:78:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.248874 containerd[1718]: 2026-03-07 01:16:46.241 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-qdz59" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:46.271840 systemd-networkd[1367]: cali629816fcdd9: Link UP Mar 7 01:16:46.280411 
systemd-networkd[1367]: cali629816fcdd9: Gained carrier Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:45.856 [ERROR][4491] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:45.930 [INFO][4491] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0 calico-kube-controllers-79dffb94bc- calico-system 0fb06cd2-739c-4484-8717-b361c9a762dd 908 0 2026-03-07 01:16:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79dffb94bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-1070eafa86 calico-kube-controllers-79dffb94bc-tv6jk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali629816fcdd9 [] [] }} ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:45.930 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.147 [INFO][4549] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" HandleID="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.170 [INFO][4549] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" HandleID="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fbbf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-1070eafa86", "pod":"calico-kube-controllers-79dffb94bc-tv6jk", "timestamp":"2026-03-07 01:16:46.147826352 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ae000)} Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.171 [INFO][4549] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.171 [INFO][4549] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.171 [INFO][4549] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.177 [INFO][4549] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.187 [INFO][4549] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.198 [INFO][4549] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.201 [INFO][4549] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.211 [INFO][4549] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.213 [INFO][4549] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.218 [INFO][4549] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9 Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.233 [INFO][4549] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.252 [INFO][4549] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.33.194/26] block=192.168.33.192/26 handle="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.252 [INFO][4549] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.194/26] handle="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.252 [INFO][4549] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:46.314427 containerd[1718]: 2026-03-07 01:16:46.252 [INFO][4549] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.194/26] IPv6=[] ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" HandleID="k8s-pod-network.f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:46.317949 containerd[1718]: 2026-03-07 01:16:46.260 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0", GenerateName:"calico-kube-controllers-79dffb94bc-", Namespace:"calico-system", SelfLink:"", UID:"0fb06cd2-739c-4484-8717-b361c9a762dd", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79dffb94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"calico-kube-controllers-79dffb94bc-tv6jk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali629816fcdd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.317949 containerd[1718]: 2026-03-07 01:16:46.260 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.194/32] ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:46.317949 containerd[1718]: 2026-03-07 01:16:46.260 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali629816fcdd9 ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:46.317949 containerd[1718]: 2026-03-07 01:16:46.280 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:46.317949 containerd[1718]: 2026-03-07 01:16:46.284 [INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0", GenerateName:"calico-kube-controllers-79dffb94bc-", Namespace:"calico-system", SelfLink:"", UID:"0fb06cd2-739c-4484-8717-b361c9a762dd", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79dffb94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9", Pod:"calico-kube-controllers-79dffb94bc-tv6jk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali629816fcdd9", MAC:"b2:6f:21:14:f8:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.317949 containerd[1718]: 2026-03-07 01:16:46.306 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9" Namespace="calico-system" Pod="calico-kube-controllers-79dffb94bc-tv6jk" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:46.339882 containerd[1718]: time="2026-03-07T01:16:46.339059430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:46.339882 containerd[1718]: time="2026-03-07T01:16:46.339148332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:46.339882 containerd[1718]: time="2026-03-07T01:16:46.339194533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.339882 containerd[1718]: time="2026-03-07T01:16:46.339322835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.374187 systemd[1]: Removed slice kubepods-besteffort-pod07474d87_ed17_46c3_afc1_e14a04645e92.slice - libcontainer container kubepods-besteffort-pod07474d87_ed17_46c3_afc1_e14a04645e92.slice. 
Mar 7 01:16:46.423311 systemd-networkd[1367]: cali9e98e4f8129: Link UP Mar 7 01:16:46.448329 systemd-networkd[1367]: cali9e98e4f8129: Gained carrier Mar 7 01:16:46.480689 systemd[1]: Started cri-containerd-6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0.scope - libcontainer container 6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0. Mar 7 01:16:46.488947 containerd[1718]: time="2026-03-07T01:16:46.486067797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:46.488947 containerd[1718]: time="2026-03-07T01:16:46.487966428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:46.490834 containerd[1718]: time="2026-03-07T01:16:46.488149331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.490834 containerd[1718]: time="2026-03-07T01:16:46.489300149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.019 [ERROR][4509] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.061 [INFO][4509] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0 goldmane-cccfbd5cf- calico-system e6dff6a9-2fcc-4331-a1f3-940a7614c300 910 0 2026-03-07 01:16:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-1070eafa86 goldmane-cccfbd5cf-4f4st eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9e98e4f8129 [] [] }} ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.061 [INFO][4509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.239 [INFO][4593] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" HandleID="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" 
Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.255 [INFO][4593] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" HandleID="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-1070eafa86", "pod":"goldmane-cccfbd5cf-4f4st", "timestamp":"2026-03-07 01:16:46.239265824 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ce2c0)} Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.255 [INFO][4593] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.255 [INFO][4593] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.255 [INFO][4593] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.277 [INFO][4593] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.292 [INFO][4593] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.312 [INFO][4593] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.318 [INFO][4593] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.324 [INFO][4593] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.324 [INFO][4593] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.330 [INFO][4593] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306 Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.339 [INFO][4593] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.371 [INFO][4593] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.33.195/26] block=192.168.33.192/26 handle="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.371 [INFO][4593] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.195/26] handle="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.371 [INFO][4593] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:46.494586 containerd[1718]: 2026-03-07 01:16:46.372 [INFO][4593] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.195/26] IPv6=[] ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" HandleID="k8s-pod-network.9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:46.496016 containerd[1718]: 2026-03-07 01:16:46.391 [INFO][4509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e6dff6a9-2fcc-4331-a1f3-940a7614c300", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"goldmane-cccfbd5cf-4f4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9e98e4f8129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.496016 containerd[1718]: 2026-03-07 01:16:46.403 [INFO][4509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.195/32] ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:46.496016 containerd[1718]: 2026-03-07 01:16:46.403 [INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e98e4f8129 ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:46.496016 containerd[1718]: 2026-03-07 01:16:46.450 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:46.496016 containerd[1718]: 2026-03-07 01:16:46.453 [INFO][4509] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e6dff6a9-2fcc-4331-a1f3-940a7614c300", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306", Pod:"goldmane-cccfbd5cf-4f4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9e98e4f8129", MAC:"56:17:bb:fa:1e:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.496016 containerd[1718]: 2026-03-07 01:16:46.491 [INFO][4509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-4f4st" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:46.552256 systemd[1]: Created slice kubepods-besteffort-pod68906e95_d44e_403d_8223_d7eeab65d83d.slice - libcontainer container kubepods-besteffort-pod68906e95_d44e_403d_8223_d7eeab65d83d.slice. Mar 7 01:16:46.578021 systemd[1]: Started cri-containerd-f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9.scope - libcontainer container f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9. Mar 7 01:16:46.586635 systemd-networkd[1367]: califdeb6aa21eb: Link UP Mar 7 01:16:46.587469 systemd-networkd[1367]: califdeb6aa21eb: Gained carrier Mar 7 01:16:46.605120 containerd[1718]: time="2026-03-07T01:16:46.601441354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:46.606599 containerd[1718]: time="2026-03-07T01:16:46.605558821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:46.606599 containerd[1718]: time="2026-03-07T01:16:46.605629822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.606599 containerd[1718]: time="2026-03-07T01:16:46.605759624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.609038 kubelet[3179]: I0307 01:16:46.608998 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/68906e95-d44e-403d-8223-d7eeab65d83d-whisker-backend-key-pair\") pod \"whisker-775dc59d49-2vc9x\" (UID: \"68906e95-d44e-403d-8223-d7eeab65d83d\") " pod="calico-system/whisker-775dc59d49-2vc9x" Mar 7 01:16:46.612922 kubelet[3179]: I0307 01:16:46.610014 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68906e95-d44e-403d-8223-d7eeab65d83d-whisker-ca-bundle\") pod \"whisker-775dc59d49-2vc9x\" (UID: \"68906e95-d44e-403d-8223-d7eeab65d83d\") " pod="calico-system/whisker-775dc59d49-2vc9x" Mar 7 01:16:46.612922 kubelet[3179]: I0307 01:16:46.610397 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4ds\" (UniqueName: \"kubernetes.io/projected/68906e95-d44e-403d-8223-d7eeab65d83d-kube-api-access-5h4ds\") pod \"whisker-775dc59d49-2vc9x\" (UID: \"68906e95-d44e-403d-8223-d7eeab65d83d\") " pod="calico-system/whisker-775dc59d49-2vc9x" Mar 7 01:16:46.612922 kubelet[3179]: I0307 01:16:46.610512 3179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/68906e95-d44e-403d-8223-d7eeab65d83d-nginx-config\") pod \"whisker-775dc59d49-2vc9x\" (UID: \"68906e95-d44e-403d-8223-d7eeab65d83d\") " pod="calico-system/whisker-775dc59d49-2vc9x" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.026 [ERROR][4504] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:16:46.616648 containerd[1718]: 
2026-03-07 01:16:46.082 [INFO][4504] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0 coredns-66bc5c9577- kube-system 99a410ce-35f7-4fd4-9422-35a8b99f1549 911 0 2026-03-07 01:16:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-1070eafa86 coredns-66bc5c9577-fv9qs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califdeb6aa21eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.083 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.229 [INFO][4604] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" HandleID="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.256 [INFO][4604] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" HandleID="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" 
Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001edc50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-1070eafa86", "pod":"coredns-66bc5c9577-fv9qs", "timestamp":"2026-03-07 01:16:46.229772071 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000322420)} Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.256 [INFO][4604] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.372 [INFO][4604] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.375 [INFO][4604] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.384 [INFO][4604] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.417 [INFO][4604] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.450 [INFO][4604] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.457 [INFO][4604] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.486 [INFO][4604] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 
host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.487 [INFO][4604] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.498 [INFO][4604] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.520 [INFO][4604] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.559 [INFO][4604] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.33.196/26] block=192.168.33.192/26 handle="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.559 [INFO][4604] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.196/26] handle="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.565 [INFO][4604] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:16:46.616648 containerd[1718]: 2026-03-07 01:16:46.565 [INFO][4604] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.196/26] IPv6=[] ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" HandleID="k8s-pod-network.e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:46.618064 containerd[1718]: 2026-03-07 01:16:46.575 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"99a410ce-35f7-4fd4-9422-35a8b99f1549", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"coredns-66bc5c9577-fv9qs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"califdeb6aa21eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.618064 containerd[1718]: 2026-03-07 01:16:46.575 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.196/32] ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:46.618064 containerd[1718]: 2026-03-07 01:16:46.575 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdeb6aa21eb ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:46.618064 containerd[1718]: 2026-03-07 01:16:46.588 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:46.618064 
containerd[1718]: 2026-03-07 01:16:46.591 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"99a410ce-35f7-4fd4-9422-35a8b99f1549", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa", Pod:"coredns-66bc5c9577-fv9qs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdeb6aa21eb", MAC:"72:d1:d8:4c:45:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:46.619536 containerd[1718]: 2026-03-07 01:16:46.614 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa" Namespace="kube-system" Pod="coredns-66bc5c9577-fv9qs" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:46.650692 systemd[1]: Started cri-containerd-9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306.scope - libcontainer container 9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306. Mar 7 01:16:46.699212 containerd[1718]: time="2026-03-07T01:16:46.699087826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:46.699392 containerd[1718]: time="2026-03-07T01:16:46.699180728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:46.699392 containerd[1718]: time="2026-03-07T01:16:46.699215328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.700064 containerd[1718]: time="2026-03-07T01:16:46.700017741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:46.740554 systemd[1]: Started cri-containerd-e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa.scope - libcontainer container e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa. Mar 7 01:16:46.777789 containerd[1718]: time="2026-03-07T01:16:46.777743892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-qdz59,Uid:49b855bf-45ba-449a-a595-ffe6b81c323a,Namespace:calico-system,Attempt:1,} returns sandbox id \"6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0\"" Mar 7 01:16:46.782622 containerd[1718]: time="2026-03-07T01:16:46.782586770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:16:46.816230 containerd[1718]: time="2026-03-07T01:16:46.815017093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79dffb94bc-tv6jk,Uid:0fb06cd2-739c-4484-8717-b361c9a762dd,Namespace:calico-system,Attempt:1,} returns sandbox id \"f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9\"" Mar 7 01:16:46.857151 containerd[1718]: time="2026-03-07T01:16:46.857098670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-4f4st,Uid:e6dff6a9-2fcc-4331-a1f3-940a7614c300,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306\"" Mar 7 01:16:46.860526 containerd[1718]: time="2026-03-07T01:16:46.860152919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fv9qs,Uid:99a410ce-35f7-4fd4-9422-35a8b99f1549,Namespace:kube-system,Attempt:1,} returns sandbox id \"e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa\"" Mar 7 01:16:46.877523 containerd[1718]: time="2026-03-07T01:16:46.876082576Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-775dc59d49-2vc9x,Uid:68906e95-d44e-403d-8223-d7eeab65d83d,Namespace:calico-system,Attempt:0,}" Mar 7 01:16:46.883998 containerd[1718]: time="2026-03-07T01:16:46.883839600Z" level=info msg="CreateContainer within sandbox \"e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:16:46.966181 containerd[1718]: time="2026-03-07T01:16:46.965438614Z" level=info msg="CreateContainer within sandbox \"e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"020e4913fe17c35d8d0a917cbb894b8d3489f8702dbefbd863a1202233ca8b78\"" Mar 7 01:16:46.969638 containerd[1718]: time="2026-03-07T01:16:46.969487979Z" level=info msg="StartContainer for \"020e4913fe17c35d8d0a917cbb894b8d3489f8702dbefbd863a1202233ca8b78\"" Mar 7 01:16:47.030268 systemd[1]: Started cri-containerd-020e4913fe17c35d8d0a917cbb894b8d3489f8702dbefbd863a1202233ca8b78.scope - libcontainer container 020e4913fe17c35d8d0a917cbb894b8d3489f8702dbefbd863a1202233ca8b78. 
Mar 7 01:16:47.098404 containerd[1718]: time="2026-03-07T01:16:47.096788829Z" level=info msg="StartContainer for \"020e4913fe17c35d8d0a917cbb894b8d3489f8702dbefbd863a1202233ca8b78\" returns successfully" Mar 7 01:16:47.142291 kubelet[3179]: I0307 01:16:47.142082 3179 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07474d87-ed17-46c3-afc1-e14a04645e92" path="/var/lib/kubelet/pods/07474d87-ed17-46c3-afc1-e14a04645e92/volumes" Mar 7 01:16:47.191502 systemd-networkd[1367]: cali8bbbc10110f: Link UP Mar 7 01:16:47.191683 systemd-networkd[1367]: cali8bbbc10110f: Gained carrier Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.024 [ERROR][4866] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.049 [INFO][4866] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0 whisker-775dc59d49- calico-system 68906e95-d44e-403d-8223-d7eeab65d83d 940 0 2026-03-07 01:16:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:775dc59d49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-1070eafa86 whisker-775dc59d49-2vc9x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8bbbc10110f [] [] }} ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.049 [INFO][4866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.114 [INFO][4902] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" HandleID="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.125 [INFO][4902] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" HandleID="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-1070eafa86", "pod":"whisker-775dc59d49-2vc9x", "timestamp":"2026-03-07 01:16:47.114799718 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038f340)} Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.125 [INFO][4902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.125 [INFO][4902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.125 [INFO][4902] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.129 [INFO][4902] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.141 [INFO][4902] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.150 [INFO][4902] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.152 [INFO][4902] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.155 [INFO][4902] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.155 [INFO][4902] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.156 [INFO][4902] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.171 [INFO][4902] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.184 [INFO][4902] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.33.197/26] block=192.168.33.192/26 handle="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.184 [INFO][4902] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.197/26] handle="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.184 [INFO][4902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:47.225501 containerd[1718]: 2026-03-07 01:16:47.184 [INFO][4902] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.197/26] IPv6=[] ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" HandleID="k8s-pod-network.4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" Mar 7 01:16:47.226634 containerd[1718]: 2026-03-07 01:16:47.187 [INFO][4866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0", GenerateName:"whisker-775dc59d49-", Namespace:"calico-system", SelfLink:"", UID:"68906e95-d44e-403d-8223-d7eeab65d83d", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775dc59d49", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"whisker-775dc59d49-2vc9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8bbbc10110f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:47.226634 containerd[1718]: 2026-03-07 01:16:47.187 [INFO][4866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.197/32] ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" Mar 7 01:16:47.226634 containerd[1718]: 2026-03-07 01:16:47.187 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bbbc10110f ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" Mar 7 01:16:47.226634 containerd[1718]: 2026-03-07 01:16:47.192 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" Mar 7 01:16:47.226634 containerd[1718]: 2026-03-07 01:16:47.195 [INFO][4866] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0", GenerateName:"whisker-775dc59d49-", Namespace:"calico-system", SelfLink:"", UID:"68906e95-d44e-403d-8223-d7eeab65d83d", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775dc59d49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c", Pod:"whisker-775dc59d49-2vc9x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8bbbc10110f", MAC:"e6:b4:74:ea:60:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:47.226634 containerd[1718]: 2026-03-07 01:16:47.222 [INFO][4866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c" 
Namespace="calico-system" Pod="whisker-775dc59d49-2vc9x" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--775dc59d49--2vc9x-eth0" Mar 7 01:16:47.268075 containerd[1718]: time="2026-03-07T01:16:47.267952684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:47.268075 containerd[1718]: time="2026-03-07T01:16:47.268040085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:47.269676 containerd[1718]: time="2026-03-07T01:16:47.268060986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:47.269676 containerd[1718]: time="2026-03-07T01:16:47.268210688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:47.302640 systemd[1]: Started cri-containerd-4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c.scope - libcontainer container 4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c. 
Mar 7 01:16:47.311316 systemd-networkd[1367]: calic508c6e5693: Gained IPv6LL Mar 7 01:16:47.407458 containerd[1718]: time="2026-03-07T01:16:47.407331728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775dc59d49-2vc9x,Uid:68906e95-d44e-403d-8223-d7eeab65d83d,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c\"" Mar 7 01:16:47.416532 kubelet[3179]: I0307 01:16:47.415540 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fv9qs" podStartSLOduration=44.415503159 podStartE2EDuration="44.415503159s" podCreationTimestamp="2026-03-07 01:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:47.385572277 +0000 UTC m=+52.350742310" watchObservedRunningTime="2026-03-07 01:16:47.415503159 +0000 UTC m=+52.380673292" Mar 7 01:16:47.476146 kernel: calico-node[4627]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:16:47.886349 systemd-networkd[1367]: califdeb6aa21eb: Gained IPv6LL Mar 7 01:16:47.887673 systemd-networkd[1367]: cali629816fcdd9: Gained IPv6LL Mar 7 01:16:48.014614 systemd-networkd[1367]: cali9e98e4f8129: Gained IPv6LL Mar 7 01:16:48.111496 systemd-networkd[1367]: vxlan.calico: Link UP Mar 7 01:16:48.111506 systemd-networkd[1367]: vxlan.calico: Gained carrier Mar 7 01:16:48.654402 systemd-networkd[1367]: cali8bbbc10110f: Gained IPv6LL Mar 7 01:16:49.934263 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Mar 7 01:16:50.539451 containerd[1718]: time="2026-03-07T01:16:50.539392203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:50.542341 containerd[1718]: time="2026-03-07T01:16:50.542066745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, 
bytes read=48415780" Mar 7 01:16:50.545135 containerd[1718]: time="2026-03-07T01:16:50.544954990Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:50.554272 containerd[1718]: time="2026-03-07T01:16:50.553823528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:50.555447 containerd[1718]: time="2026-03-07T01:16:50.555404453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.77261668s" Mar 7 01:16:50.555574 containerd[1718]: time="2026-03-07T01:16:50.555549555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:16:50.557210 containerd[1718]: time="2026-03-07T01:16:50.556551271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:16:50.563449 containerd[1718]: time="2026-03-07T01:16:50.563417278Z" level=info msg="CreateContainer within sandbox \"6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:16:50.600860 containerd[1718]: time="2026-03-07T01:16:50.600813061Z" level=info msg="CreateContainer within sandbox \"6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"073329782cb189607970707757576f36e40f67129a8cc595ba84ac4540211190\"" Mar 7 01:16:50.601724 containerd[1718]: time="2026-03-07T01:16:50.601691974Z" level=info msg="StartContainer for \"073329782cb189607970707757576f36e40f67129a8cc595ba84ac4540211190\"" Mar 7 01:16:50.644283 systemd[1]: Started cri-containerd-073329782cb189607970707757576f36e40f67129a8cc595ba84ac4540211190.scope - libcontainer container 073329782cb189607970707757576f36e40f67129a8cc595ba84ac4540211190. Mar 7 01:16:50.690696 containerd[1718]: time="2026-03-07T01:16:50.690557460Z" level=info msg="StartContainer for \"073329782cb189607970707757576f36e40f67129a8cc595ba84ac4540211190\" returns successfully" Mar 7 01:16:52.393595 kubelet[3179]: I0307 01:16:52.393158 3179 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:16:53.845691 containerd[1718]: time="2026-03-07T01:16:53.845635635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:53.849013 containerd[1718]: time="2026-03-07T01:16:53.848944687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:16:53.853960 containerd[1718]: time="2026-03-07T01:16:53.853900964Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:53.859772 containerd[1718]: time="2026-03-07T01:16:53.859717555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:53.860880 containerd[1718]: time="2026-03-07T01:16:53.860413166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.303827594s" Mar 7 01:16:53.860880 containerd[1718]: time="2026-03-07T01:16:53.860452166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:16:53.863517 containerd[1718]: time="2026-03-07T01:16:53.863321511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:16:53.886952 containerd[1718]: time="2026-03-07T01:16:53.886906678Z" level=info msg="CreateContainer within sandbox \"f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:16:53.923039 containerd[1718]: time="2026-03-07T01:16:53.922993141Z" level=info msg="CreateContainer within sandbox \"f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"16a3e917b357ccbaad3282e519a674ad818fe1e8f9387eebbd2dc20081869bd3\"" Mar 7 01:16:53.923767 containerd[1718]: time="2026-03-07T01:16:53.923729952Z" level=info msg="StartContainer for \"16a3e917b357ccbaad3282e519a674ad818fe1e8f9387eebbd2dc20081869bd3\"" Mar 7 01:16:53.961278 systemd[1]: Started cri-containerd-16a3e917b357ccbaad3282e519a674ad818fe1e8f9387eebbd2dc20081869bd3.scope - libcontainer container 16a3e917b357ccbaad3282e519a674ad818fe1e8f9387eebbd2dc20081869bd3. 
Mar 7 01:16:54.008190 containerd[1718]: time="2026-03-07T01:16:54.007966465Z" level=info msg="StartContainer for \"16a3e917b357ccbaad3282e519a674ad818fe1e8f9387eebbd2dc20081869bd3\" returns successfully" Mar 7 01:16:54.419676 kubelet[3179]: I0307 01:16:54.417978 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5c79b8d867-qdz59" podStartSLOduration=35.642198626 podStartE2EDuration="39.417958856s" podCreationTimestamp="2026-03-07 01:16:15 +0000 UTC" firstStartedPulling="2026-03-07 01:16:46.780663139 +0000 UTC m=+51.745833172" lastFinishedPulling="2026-03-07 01:16:50.556423269 +0000 UTC m=+55.521593402" observedRunningTime="2026-03-07 01:16:51.410365379 +0000 UTC m=+56.375535412" watchObservedRunningTime="2026-03-07 01:16:54.417958856 +0000 UTC m=+59.383128889" Mar 7 01:16:54.419676 kubelet[3179]: I0307 01:16:54.418381 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79dffb94bc-tv6jk" podStartSLOduration=31.375922637 podStartE2EDuration="38.418366662s" podCreationTimestamp="2026-03-07 01:16:16 +0000 UTC" firstStartedPulling="2026-03-07 01:16:46.81922626 +0000 UTC m=+51.784396393" lastFinishedPulling="2026-03-07 01:16:53.861670385 +0000 UTC m=+58.826840418" observedRunningTime="2026-03-07 01:16:54.417254345 +0000 UTC m=+59.382424478" watchObservedRunningTime="2026-03-07 01:16:54.418366662 +0000 UTC m=+59.383536695" Mar 7 01:16:55.134188 containerd[1718]: time="2026-03-07T01:16:55.134141318Z" level=info msg="StopPodSandbox for \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\"" Mar 7 01:16:55.148529 containerd[1718]: time="2026-03-07T01:16:55.147476726Z" level=info msg="StopPodSandbox for \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\"" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.266 [WARNING][5253] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with 
the clean up ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.268 [INFO][5253] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.268 [INFO][5253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" iface="eth0" netns="" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.268 [INFO][5253] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.268 [INFO][5253] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.334 [INFO][5266] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.335 [INFO][5266] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.336 [INFO][5266] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.347 [WARNING][5266] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.347 [INFO][5266] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.351 [INFO][5266] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:55.357666 containerd[1718]: 2026-03-07 01:16:55.354 [INFO][5253] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.360688 containerd[1718]: time="2026-03-07T01:16:55.360651249Z" level=info msg="TearDown network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\" successfully" Mar 7 01:16:55.360876 containerd[1718]: time="2026-03-07T01:16:55.360856852Z" level=info msg="StopPodSandbox for \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\" returns successfully" Mar 7 01:16:55.364212 containerd[1718]: time="2026-03-07T01:16:55.364171303Z" level=info msg="RemovePodSandbox for \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\"" Mar 7 01:16:55.364385 containerd[1718]: time="2026-03-07T01:16:55.364337406Z" level=info msg="Forcibly stopping sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\"" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.283 [INFO][5257] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:16:55.399473 
containerd[1718]: 2026-03-07 01:16:55.284 [INFO][5257] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" iface="eth0" netns="/var/run/netns/cni-47490882-f691-3b59-cebe-afb0519e8163" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.285 [INFO][5257] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" iface="eth0" netns="/var/run/netns/cni-47490882-f691-3b59-cebe-afb0519e8163" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.285 [INFO][5257] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" iface="eth0" netns="/var/run/netns/cni-47490882-f691-3b59-cebe-afb0519e8163" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.285 [INFO][5257] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.285 [INFO][5257] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.367 [INFO][5271] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.367 [INFO][5271] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.367 [INFO][5271] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.379 [WARNING][5271] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.380 [INFO][5271] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.382 [INFO][5271] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:55.399473 containerd[1718]: 2026-03-07 01:16:55.390 [INFO][5257] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:16:55.407299 systemd[1]: run-netns-cni\x2d47490882\x2df691\x2d3b59\x2dcebe\x2dafb0519e8163.mount: Deactivated successfully. 
Mar 7 01:16:55.409993 containerd[1718]: time="2026-03-07T01:16:55.407949786Z" level=info msg="TearDown network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\" successfully" Mar 7 01:16:55.409993 containerd[1718]: time="2026-03-07T01:16:55.407995186Z" level=info msg="StopPodSandbox for \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\" returns successfully" Mar 7 01:16:55.421469 containerd[1718]: time="2026-03-07T01:16:55.421223793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gkvkn,Uid:28e408df-d765-4f81-83a6-862639ee589c,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.446 [WARNING][5288] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.447 [INFO][5288] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.447 [INFO][5288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" iface="eth0" netns="" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.447 [INFO][5288] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.447 [INFO][5288] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.544 [INFO][5295] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.544 [INFO][5295] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.544 [INFO][5295] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.562 [WARNING][5295] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.562 [INFO][5295] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" HandleID="k8s-pod-network.f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Workload="ci--4081.3.6--n--1070eafa86-k8s-whisker--6c4858b6b8--t58mf-eth0" Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.564 [INFO][5295] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:55.574160 containerd[1718]: 2026-03-07 01:16:55.569 [INFO][5288] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04" Mar 7 01:16:55.575041 containerd[1718]: time="2026-03-07T01:16:55.574853987Z" level=info msg="TearDown network for sandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\" successfully" Mar 7 01:16:55.663556 systemd-networkd[1367]: cali4c483475f02: Link UP Mar 7 01:16:55.665178 systemd-networkd[1367]: cali4c483475f02: Gained carrier Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.589 [INFO][5300] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0 csi-node-driver- calico-system 28e408df-d765-4f81-83a6-862639ee589c 997 0 2026-03-07 01:16:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 
ci-4081.3.6-n-1070eafa86 csi-node-driver-gkvkn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4c483475f02 [] [] }} ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.589 [INFO][5300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.617 [INFO][5315] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" HandleID="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.625 [INFO][5315] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" HandleID="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-1070eafa86", "pod":"csi-node-driver-gkvkn", "timestamp":"2026-03-07 01:16:55.617504552 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} 
Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.625 [INFO][5315] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.625 [INFO][5315] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.625 [INFO][5315] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.627 [INFO][5315] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.631 [INFO][5315] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.635 [INFO][5315] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.637 [INFO][5315] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.639 [INFO][5315] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.639 [INFO][5315] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.640 [INFO][5315] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.645 [INFO][5315] ipam/ipam.go 1272: Writing block in 
order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.656 [INFO][5315] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.33.198/26] block=192.168.33.192/26 handle="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.657 [INFO][5315] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.198/26] handle="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.657 [INFO][5315] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:55.685123 containerd[1718]: 2026-03-07 01:16:55.657 [INFO][5315] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.198/26] IPv6=[] ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" HandleID="k8s-pod-network.f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.686045 containerd[1718]: 2026-03-07 01:16:55.659 [INFO][5300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28e408df-d765-4f81-83a6-862639ee589c", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, 
time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"csi-node-driver-gkvkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c483475f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:55.686045 containerd[1718]: 2026-03-07 01:16:55.659 [INFO][5300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.198/32] ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.686045 containerd[1718]: 2026-03-07 01:16:55.659 [INFO][5300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c483475f02 ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.686045 containerd[1718]: 2026-03-07 01:16:55.666 [INFO][5300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:55.686045 containerd[1718]: 2026-03-07 01:16:55.666 [INFO][5300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28e408df-d765-4f81-83a6-862639ee589c", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf", Pod:"csi-node-driver-gkvkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c483475f02", MAC:"5a:3e:7f:d8:a7:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:55.686045 containerd[1718]: 2026-03-07 01:16:55.683 [INFO][5300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf" Namespace="calico-system" Pod="csi-node-driver-gkvkn" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:16:56.064526 containerd[1718]: time="2026-03-07T01:16:56.064476663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 01:16:56.064691 containerd[1718]: time="2026-03-07T01:16:56.064570765Z" level=info msg="RemovePodSandbox \"f54f28c1bb9ed0d286c8c871dbba37591e27588f91ab38789bf943de5df1cc04\" returns successfully" Mar 7 01:16:56.067754 containerd[1718]: time="2026-03-07T01:16:56.067041304Z" level=info msg="StopPodSandbox for \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\"" Mar 7 01:16:56.073704 containerd[1718]: time="2026-03-07T01:16:56.072354088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:56.073704 containerd[1718]: time="2026-03-07T01:16:56.072420690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:56.073704 containerd[1718]: time="2026-03-07T01:16:56.072443190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:56.073704 containerd[1718]: time="2026-03-07T01:16:56.072677094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:56.133862 systemd[1]: Started cri-containerd-f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf.scope - libcontainer container f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf. Mar 7 01:16:56.197873 containerd[1718]: time="2026-03-07T01:16:56.197626481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gkvkn,Uid:28e408df-d765-4f81-83a6-862639ee589c,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf\"" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.161 [WARNING][5374] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"49b855bf-45ba-449a-a595-ffe6b81c323a", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0", Pod:"calico-apiserver-5c79b8d867-qdz59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic508c6e5693", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.162 [INFO][5374] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.162 [INFO][5374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" iface="eth0" netns="" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.162 [INFO][5374] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.162 [INFO][5374] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.220 [INFO][5397] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.220 [INFO][5397] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.221 [INFO][5397] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.232 [WARNING][5397] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.232 [INFO][5397] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.234 [INFO][5397] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:56.238716 containerd[1718]: 2026-03-07 01:16:56.236 [INFO][5374] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.240118 containerd[1718]: time="2026-03-07T01:16:56.238764035Z" level=info msg="TearDown network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\" successfully" Mar 7 01:16:56.240118 containerd[1718]: time="2026-03-07T01:16:56.238853137Z" level=info msg="StopPodSandbox for \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\" returns successfully" Mar 7 01:16:56.240118 containerd[1718]: time="2026-03-07T01:16:56.239740251Z" level=info msg="RemovePodSandbox for \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\"" Mar 7 01:16:56.240118 containerd[1718]: time="2026-03-07T01:16:56.239776252Z" level=info msg="Forcibly stopping sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\"" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.295 [WARNING][5419] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"49b855bf-45ba-449a-a595-ffe6b81c323a", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"6278c3a4e17a118387005cf275ecfe8cbbc17f4b7b915890797a553b19f653d0", Pod:"calico-apiserver-5c79b8d867-qdz59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic508c6e5693", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.295 [INFO][5419] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.295 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" iface="eth0" netns="" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.295 [INFO][5419] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.295 [INFO][5419] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.336 [INFO][5426] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.336 [INFO][5426] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.336 [INFO][5426] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.346 [WARNING][5426] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.346 [INFO][5426] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" HandleID="k8s-pod-network.64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--qdz59-eth0" Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.348 [INFO][5426] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:56.353866 containerd[1718]: 2026-03-07 01:16:56.349 [INFO][5419] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807" Mar 7 01:16:56.355840 containerd[1718]: time="2026-03-07T01:16:56.354672079Z" level=info msg="TearDown network for sandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\" successfully" Mar 7 01:16:56.365875 containerd[1718]: time="2026-03-07T01:16:56.365827057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:16:56.366033 containerd[1718]: time="2026-03-07T01:16:56.365899658Z" level=info msg="RemovePodSandbox \"64429c12831c61cc8ed3558290a0c98b2b828859170a7a20b8fc176174c51807\" returns successfully" Mar 7 01:16:56.366893 containerd[1718]: time="2026-03-07T01:16:56.366593269Z" level=info msg="StopPodSandbox for \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\"" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.436 [WARNING][5440] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e6dff6a9-2fcc-4331-a1f3-940a7614c300", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306", Pod:"goldmane-cccfbd5cf-4f4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali9e98e4f8129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.436 [INFO][5440] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.437 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" iface="eth0" netns="" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.437 [INFO][5440] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.437 [INFO][5440] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.477 [INFO][5447] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.477 [INFO][5447] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.477 [INFO][5447] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.487 [WARNING][5447] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.487 [INFO][5447] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.489 [INFO][5447] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:56.494099 containerd[1718]: 2026-03-07 01:16:56.491 [INFO][5440] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.494926 containerd[1718]: time="2026-03-07T01:16:56.494169598Z" level=info msg="TearDown network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\" successfully" Mar 7 01:16:56.494926 containerd[1718]: time="2026-03-07T01:16:56.494198498Z" level=info msg="StopPodSandbox for \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\" returns successfully" Mar 7 01:16:56.495599 containerd[1718]: time="2026-03-07T01:16:56.495563520Z" level=info msg="RemovePodSandbox for \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\"" Mar 7 01:16:56.495708 containerd[1718]: time="2026-03-07T01:16:56.495606321Z" level=info msg="Forcibly stopping sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\"" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.547 [WARNING][5461] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e6dff6a9-2fcc-4331-a1f3-940a7614c300", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306", Pod:"goldmane-cccfbd5cf-4f4st", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9e98e4f8129", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.550 [INFO][5461] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.550 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" iface="eth0" netns="" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.551 [INFO][5461] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.551 [INFO][5461] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.607 [INFO][5469] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.607 [INFO][5469] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.607 [INFO][5469] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.619 [WARNING][5469] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.619 [INFO][5469] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" HandleID="k8s-pod-network.3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Workload="ci--4081.3.6--n--1070eafa86-k8s-goldmane--cccfbd5cf--4f4st-eth0" Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.621 [INFO][5469] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:56.626959 containerd[1718]: 2026-03-07 01:16:56.623 [INFO][5461] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457" Mar 7 01:16:56.626959 containerd[1718]: time="2026-03-07T01:16:56.626920410Z" level=info msg="TearDown network for sandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\" successfully" Mar 7 01:16:56.644859 containerd[1718]: time="2026-03-07T01:16:56.644812294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:16:56.645006 containerd[1718]: time="2026-03-07T01:16:56.644884895Z" level=info msg="RemovePodSandbox \"3f328993792bd17ca28a25fd9029e6bbafb0f4dbdcccb6f1f56736f5a0707457\" returns successfully" Mar 7 01:16:56.647764 containerd[1718]: time="2026-03-07T01:16:56.647044830Z" level=info msg="StopPodSandbox for \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\"" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.706 [WARNING][5484] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0", GenerateName:"calico-kube-controllers-79dffb94bc-", Namespace:"calico-system", SelfLink:"", UID:"0fb06cd2-739c-4484-8717-b361c9a762dd", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79dffb94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9", Pod:"calico-kube-controllers-79dffb94bc-tv6jk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali629816fcdd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.706 [INFO][5484] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.706 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" iface="eth0" netns="" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.706 [INFO][5484] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.706 [INFO][5484] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.745 [INFO][5491] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.745 [INFO][5491] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.746 [INFO][5491] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.755 [WARNING][5491] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.755 [INFO][5491] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.757 [INFO][5491] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:56.762232 containerd[1718]: 2026-03-07 01:16:56.759 [INFO][5484] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.763048 containerd[1718]: time="2026-03-07T01:16:56.762275763Z" level=info msg="TearDown network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\" successfully" Mar 7 01:16:56.763048 containerd[1718]: time="2026-03-07T01:16:56.762303163Z" level=info msg="StopPodSandbox for \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\" returns successfully" Mar 7 01:16:56.763490 containerd[1718]: time="2026-03-07T01:16:56.763462981Z" level=info msg="RemovePodSandbox for \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\"" Mar 7 01:16:56.763579 containerd[1718]: time="2026-03-07T01:16:56.763497182Z" level=info msg="Forcibly stopping sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\"" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.821 [WARNING][5505] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0", GenerateName:"calico-kube-controllers-79dffb94bc-", Namespace:"calico-system", SelfLink:"", UID:"0fb06cd2-739c-4484-8717-b361c9a762dd", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79dffb94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"f570ae393034332eecdafec1d159de21d274d79339617e86544b66be74e3fab9", Pod:"calico-kube-controllers-79dffb94bc-tv6jk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali629816fcdd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.821 [INFO][5505] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.821 [INFO][5505] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" iface="eth0" netns="" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.821 [INFO][5505] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.821 [INFO][5505] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.859 [INFO][5513] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.859 [INFO][5513] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.859 [INFO][5513] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.869 [WARNING][5513] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.869 [INFO][5513] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" HandleID="k8s-pod-network.da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--kube--controllers--79dffb94bc--tv6jk-eth0" Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.872 [INFO][5513] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:56.879363 containerd[1718]: 2026-03-07 01:16:56.875 [INFO][5505] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca" Mar 7 01:16:56.880026 containerd[1718]: time="2026-03-07T01:16:56.879350425Z" level=info msg="TearDown network for sandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\" successfully" Mar 7 01:16:56.890918 containerd[1718]: time="2026-03-07T01:16:56.890596104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:16:56.890918 containerd[1718]: time="2026-03-07T01:16:56.890675105Z" level=info msg="RemovePodSandbox \"da0362cf836ca95ecea6f60288f5ef842bd24bd3a1bcfeff2a7fa71676550eca\" returns successfully" Mar 7 01:16:56.892094 containerd[1718]: time="2026-03-07T01:16:56.892012226Z" level=info msg="StopPodSandbox for \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\"" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:56.967 [WARNING][5527] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"99a410ce-35f7-4fd4-9422-35a8b99f1549", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa", Pod:"coredns-66bc5c9577-fv9qs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdeb6aa21eb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:56.968 [INFO][5527] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:56.969 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" iface="eth0" netns="" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:56.969 [INFO][5527] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:56.969 [INFO][5527] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:57.009 [INFO][5534] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:57.010 [INFO][5534] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:57.010 [INFO][5534] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:57.019 [WARNING][5534] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:57.019 [INFO][5534] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:57.021 [INFO][5534] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.025479 containerd[1718]: 2026-03-07 01:16:57.023 [INFO][5527] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.026518 containerd[1718]: time="2026-03-07T01:16:57.025517450Z" level=info msg="TearDown network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\" successfully" Mar 7 01:16:57.026518 containerd[1718]: time="2026-03-07T01:16:57.025547250Z" level=info msg="StopPodSandbox for \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\" returns successfully" Mar 7 01:16:57.027692 containerd[1718]: time="2026-03-07T01:16:57.027182476Z" level=info msg="RemovePodSandbox for \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\"" Mar 7 01:16:57.027692 containerd[1718]: time="2026-03-07T01:16:57.027270978Z" level=info msg="Forcibly stopping sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\"" Mar 7 01:16:57.067322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704981157.mount: Deactivated successfully. 
Mar 7 01:16:57.137019 containerd[1718]: time="2026-03-07T01:16:57.136842421Z" level=info msg="StopPodSandbox for \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\"" Mar 7 01:16:57.137642 containerd[1718]: time="2026-03-07T01:16:57.137567932Z" level=info msg="StopPodSandbox for \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\"" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.080 [WARNING][5548] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"99a410ce-35f7-4fd4-9422-35a8b99f1549", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"e30af2e5786fc04a76282c9413e9bf2ac870aec5656852049f93954e8a8264fa", Pod:"coredns-66bc5c9577-fv9qs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califdeb6aa21eb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.082 [INFO][5548] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.082 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" iface="eth0" netns="" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.082 [INFO][5548] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.082 [INFO][5548] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.122 [INFO][5556] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.122 [INFO][5556] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.123 [INFO][5556] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.133 [WARNING][5556] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.134 [INFO][5556] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" HandleID="k8s-pod-network.3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--fv9qs-eth0" Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.138 [INFO][5556] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.150044 containerd[1718]: 2026-03-07 01:16:57.146 [INFO][5548] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361" Mar 7 01:16:57.150044 containerd[1718]: time="2026-03-07T01:16:57.149991330Z" level=info msg="TearDown network for sandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\" successfully" Mar 7 01:16:57.162342 containerd[1718]: time="2026-03-07T01:16:57.161535913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:16:57.162342 containerd[1718]: time="2026-03-07T01:16:57.161621815Z" level=info msg="RemovePodSandbox \"3078f7c1c954335a20d53b4390f687a7ba812a7fcfc6f41d138520a3eba15361\" returns successfully" Mar 7 01:16:57.294433 systemd-networkd[1367]: cali4c483475f02: Gained IPv6LL Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.236 [INFO][5588] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.237 [INFO][5588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" iface="eth0" netns="/var/run/netns/cni-0357ef1a-2fce-5e87-ebdc-68e87de072e9" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.237 [INFO][5588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" iface="eth0" netns="/var/run/netns/cni-0357ef1a-2fce-5e87-ebdc-68e87de072e9" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.238 [INFO][5588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" iface="eth0" netns="/var/run/netns/cni-0357ef1a-2fce-5e87-ebdc-68e87de072e9" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.239 [INFO][5588] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.239 [INFO][5588] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.365 [INFO][5597] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.368 [INFO][5597] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.368 [INFO][5597] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.400 [WARNING][5597] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.400 [INFO][5597] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.403 [INFO][5597] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.421087 containerd[1718]: 2026-03-07 01:16:57.409 [INFO][5588] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Mar 7 01:16:57.421087 containerd[1718]: time="2026-03-07T01:16:57.414457336Z" level=info msg="TearDown network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\" successfully" Mar 7 01:16:57.421087 containerd[1718]: time="2026-03-07T01:16:57.414490737Z" level=info msg="StopPodSandbox for \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\" returns successfully" Mar 7 01:16:57.425788 systemd[1]: run-netns-cni\x2d0357ef1a\x2d2fce\x2d5e87\x2debdc\x2d68e87de072e9.mount: Deactivated successfully. 
Mar 7 01:16:57.429954 containerd[1718]: time="2026-03-07T01:16:57.429907082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8rgcl,Uid:a2c57fb9-f299-436c-8599-34cdb3343b9a,Namespace:kube-system,Attempt:1,}" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.273 [INFO][5581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.274 [INFO][5581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" iface="eth0" netns="/var/run/netns/cni-ca00bf60-5980-1f9f-e7ba-198e16c10636" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.274 [INFO][5581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" iface="eth0" netns="/var/run/netns/cni-ca00bf60-5980-1f9f-e7ba-198e16c10636" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.276 [INFO][5581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" iface="eth0" netns="/var/run/netns/cni-ca00bf60-5980-1f9f-e7ba-198e16c10636" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.276 [INFO][5581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.276 [INFO][5581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.418 [INFO][5602] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.432 [INFO][5602] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.432 [INFO][5602] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.453 [WARNING][5602] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.453 [INFO][5602] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.457 [INFO][5602] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.469129 containerd[1718]: 2026-03-07 01:16:57.462 [INFO][5581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:16:57.469697 containerd[1718]: time="2026-03-07T01:16:57.469377210Z" level=info msg="TearDown network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\" successfully" Mar 7 01:16:57.469697 containerd[1718]: time="2026-03-07T01:16:57.469406110Z" level=info msg="StopPodSandbox for \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\" returns successfully" Mar 7 01:16:57.481432 systemd[1]: run-netns-cni\x2dca00bf60\x2d5980\x2d1f9f\x2de7ba\x2d198e16c10636.mount: Deactivated successfully. 
Mar 7 01:16:57.486334 containerd[1718]: time="2026-03-07T01:16:57.486290879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-vk5ds,Uid:7e3699ed-bb54-40b3-937a-980b33b5dfb7,Namespace:calico-system,Attempt:1,}" Mar 7 01:16:57.742587 systemd-networkd[1367]: calibb4a1f0d3ff: Link UP Mar 7 01:16:57.745939 systemd-networkd[1367]: calibb4a1f0d3ff: Gained carrier Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.594 [INFO][5611] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0 coredns-66bc5c9577- kube-system a2c57fb9-f299-436c-8599-34cdb3343b9a 1009 0 2026-03-07 01:16:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-1070eafa86 coredns-66bc5c9577-8rgcl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb4a1f0d3ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.594 [INFO][5611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.647 [INFO][5636] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" 
HandleID="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.662 [INFO][5636] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" HandleID="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-1070eafa86", "pod":"coredns-66bc5c9577-8rgcl", "timestamp":"2026-03-07 01:16:57.647130837 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d0f20)} Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.663 [INFO][5636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.663 [INFO][5636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.663 [INFO][5636] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.667 [INFO][5636] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.675 [INFO][5636] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.686 [INFO][5636] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.689 [INFO][5636] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.693 [INFO][5636] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.693 [INFO][5636] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.695 [INFO][5636] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316 Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.702 [INFO][5636] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.719 [INFO][5636] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.33.199/26] block=192.168.33.192/26 handle="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.720 [INFO][5636] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.199/26] handle="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.720 [INFO][5636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.786501 containerd[1718]: 2026-03-07 01:16:57.720 [INFO][5636] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.199/26] IPv6=[] ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" HandleID="k8s-pod-network.59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.788288 containerd[1718]: 2026-03-07 01:16:57.725 [INFO][5611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a2c57fb9-f299-436c-8599-34cdb3343b9a", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"coredns-66bc5c9577-8rgcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb4a1f0d3ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.788288 containerd[1718]: 2026-03-07 01:16:57.726 [INFO][5611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.199/32] ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.788288 containerd[1718]: 2026-03-07 01:16:57.726 [INFO][5611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb4a1f0d3ff 
ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.788288 containerd[1718]: 2026-03-07 01:16:57.752 [INFO][5611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.788288 containerd[1718]: 2026-03-07 01:16:57.760 [INFO][5611] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a2c57fb9-f299-436c-8599-34cdb3343b9a", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316", 
Pod:"coredns-66bc5c9577-8rgcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb4a1f0d3ff", MAC:"e2:88:fc:35:52:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.788655 containerd[1718]: 2026-03-07 01:16:57.778 [INFO][5611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316" Namespace="kube-system" Pod="coredns-66bc5c9577-8rgcl" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0" Mar 7 01:16:57.857995 systemd-networkd[1367]: cali661679d389c: Link UP Mar 7 01:16:57.858343 systemd-networkd[1367]: cali661679d389c: Gained carrier Mar 7 01:16:57.881985 containerd[1718]: time="2026-03-07T01:16:57.881798970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:57.881985 containerd[1718]: time="2026-03-07T01:16:57.881862071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:57.881985 containerd[1718]: time="2026-03-07T01:16:57.881883771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:57.883426 containerd[1718]: time="2026-03-07T01:16:57.883383595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.659 [INFO][5620] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0 calico-apiserver-5c79b8d867- calico-system 7e3699ed-bb54-40b3-937a-980b33b5dfb7 1010 0 2026-03-07 01:16:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c79b8d867 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-1070eafa86 calico-apiserver-5c79b8d867-vk5ds eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali661679d389c [] [] }} ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.659 [INFO][5620] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" 
WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.745 [INFO][5645] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" HandleID="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.757 [INFO][5645] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" HandleID="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380e40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-1070eafa86", "pod":"calico-apiserver-5c79b8d867-vk5ds", "timestamp":"2026-03-07 01:16:57.745617904 +0000 UTC"}, Hostname:"ci-4081.3.6-n-1070eafa86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188dc0)} Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.757 [INFO][5645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.757 [INFO][5645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.757 [INFO][5645] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-1070eafa86' Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.768 [INFO][5645] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.784 [INFO][5645] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.795 [INFO][5645] ipam/ipam.go 526: Trying affinity for 192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.800 [INFO][5645] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.805 [INFO][5645] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.806 [INFO][5645] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.810 [INFO][5645] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2 Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.819 [INFO][5645] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.833 [INFO][5645] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.33.200/26] block=192.168.33.192/26 handle="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.833 [INFO][5645] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.200/26] handle="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" host="ci-4081.3.6-n-1070eafa86" Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.833 [INFO][5645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:16:57.894780 containerd[1718]: 2026-03-07 01:16:57.833 [INFO][5645] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.200/26] IPv6=[] ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" HandleID="k8s-pod-network.e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.896768 containerd[1718]: 2026-03-07 01:16:57.844 [INFO][5620] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"7e3699ed-bb54-40b3-937a-980b33b5dfb7", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"", Pod:"calico-apiserver-5c79b8d867-vk5ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali661679d389c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.896768 containerd[1718]: 2026-03-07 01:16:57.844 [INFO][5620] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.200/32] ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.896768 containerd[1718]: 2026-03-07 01:16:57.844 [INFO][5620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali661679d389c ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.896768 containerd[1718]: 2026-03-07 01:16:57.862 [INFO][5620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" 
WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.896768 containerd[1718]: 2026-03-07 01:16:57.866 [INFO][5620] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"7e3699ed-bb54-40b3-937a-980b33b5dfb7", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2", Pod:"calico-apiserver-5c79b8d867-vk5ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali661679d389c", MAC:"0a:fa:b9:a0:a6:fe", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:16:57.896768 containerd[1718]: 2026-03-07 01:16:57.885 [INFO][5620] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2" Namespace="calico-system" Pod="calico-apiserver-5c79b8d867-vk5ds" WorkloadEndpoint="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:16:57.957748 systemd[1]: Started cri-containerd-59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316.scope - libcontainer container 59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316. Mar 7 01:16:58.000982 containerd[1718]: time="2026-03-07T01:16:57.999890549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:58.000982 containerd[1718]: time="2026-03-07T01:16:57.999951150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:58.000982 containerd[1718]: time="2026-03-07T01:16:57.999981150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:58.001257 containerd[1718]: time="2026-03-07T01:16:58.000152253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:58.055380 systemd[1]: Started cri-containerd-e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2.scope - libcontainer container e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2. 
Mar 7 01:16:58.066135 containerd[1718]: time="2026-03-07T01:16:58.066065401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8rgcl,Uid:a2c57fb9-f299-436c-8599-34cdb3343b9a,Namespace:kube-system,Attempt:1,} returns sandbox id \"59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316\"" Mar 7 01:16:58.105207 containerd[1718]: time="2026-03-07T01:16:58.105165623Z" level=info msg="CreateContainer within sandbox \"59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:16:58.136763 containerd[1718]: time="2026-03-07T01:16:58.136724225Z" level=info msg="CreateContainer within sandbox \"59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f326f6e9f96d33b7323a7f2b7df31d8d9b2e790ff54d6b53185487a6329d1ab\"" Mar 7 01:16:58.140384 containerd[1718]: time="2026-03-07T01:16:58.140319282Z" level=info msg="StartContainer for \"7f326f6e9f96d33b7323a7f2b7df31d8d9b2e790ff54d6b53185487a6329d1ab\"" Mar 7 01:16:58.146752 containerd[1718]: time="2026-03-07T01:16:58.146701584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c79b8d867-vk5ds,Uid:7e3699ed-bb54-40b3-937a-980b33b5dfb7,Namespace:calico-system,Attempt:1,} returns sandbox id \"e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2\"" Mar 7 01:16:58.160997 containerd[1718]: time="2026-03-07T01:16:58.160954610Z" level=info msg="CreateContainer within sandbox \"e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:16:58.199550 systemd[1]: Started cri-containerd-7f326f6e9f96d33b7323a7f2b7df31d8d9b2e790ff54d6b53185487a6329d1ab.scope - libcontainer container 7f326f6e9f96d33b7323a7f2b7df31d8d9b2e790ff54d6b53185487a6329d1ab. 
Mar 7 01:16:58.205198 containerd[1718]: time="2026-03-07T01:16:58.205157714Z" level=info msg="CreateContainer within sandbox \"e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b35df55a4991d2be741e15287d3cc10202f209be2704cf5d002a2c1e2876808\"" Mar 7 01:16:58.209725 containerd[1718]: time="2026-03-07T01:16:58.209683986Z" level=info msg="StartContainer for \"7b35df55a4991d2be741e15287d3cc10202f209be2704cf5d002a2c1e2876808\"" Mar 7 01:16:58.261559 containerd[1718]: time="2026-03-07T01:16:58.260963701Z" level=info msg="StartContainer for \"7f326f6e9f96d33b7323a7f2b7df31d8d9b2e790ff54d6b53185487a6329d1ab\" returns successfully" Mar 7 01:16:58.270810 systemd[1]: Started cri-containerd-7b35df55a4991d2be741e15287d3cc10202f209be2704cf5d002a2c1e2876808.scope - libcontainer container 7b35df55a4991d2be741e15287d3cc10202f209be2704cf5d002a2c1e2876808. Mar 7 01:16:58.354676 containerd[1718]: time="2026-03-07T01:16:58.354631291Z" level=info msg="StartContainer for \"7b35df55a4991d2be741e15287d3cc10202f209be2704cf5d002a2c1e2876808\" returns successfully" Mar 7 01:16:58.518764 kubelet[3179]: I0307 01:16:58.518622 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8rgcl" podStartSLOduration=55.518599099 podStartE2EDuration="55.518599099s" podCreationTimestamp="2026-03-07 01:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:58.485193468 +0000 UTC m=+63.450363601" watchObservedRunningTime="2026-03-07 01:16:58.518599099 +0000 UTC m=+63.483769232" Mar 7 01:16:58.595957 containerd[1718]: time="2026-03-07T01:16:58.595686725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:58.598986 containerd[1718]: 
time="2026-03-07T01:16:58.598916977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:16:58.603775 containerd[1718]: time="2026-03-07T01:16:58.602343031Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:58.607771 containerd[1718]: time="2026-03-07T01:16:58.607723017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:58.609090 containerd[1718]: time="2026-03-07T01:16:58.608504029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.745145518s" Mar 7 01:16:58.609348 containerd[1718]: time="2026-03-07T01:16:58.609328142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:16:58.611897 containerd[1718]: time="2026-03-07T01:16:58.611868883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:16:58.618800 containerd[1718]: time="2026-03-07T01:16:58.618751192Z" level=info msg="CreateContainer within sandbox \"9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:16:58.656401 containerd[1718]: time="2026-03-07T01:16:58.656358191Z" level=info msg="CreateContainer within sandbox \"9d1965fca67dd25290b973f19484bf3ff611edd0cc6a84f779356c7008a25306\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2406ae0f18733cb66b53399c6cd82e925055992fa74c6787c4b5a062b8b12479\"" Mar 7 01:16:58.658824 containerd[1718]: time="2026-03-07T01:16:58.658790029Z" level=info msg="StartContainer for \"2406ae0f18733cb66b53399c6cd82e925055992fa74c6787c4b5a062b8b12479\"" Mar 7 01:16:58.717295 systemd[1]: Started cri-containerd-2406ae0f18733cb66b53399c6cd82e925055992fa74c6787c4b5a062b8b12479.scope - libcontainer container 2406ae0f18733cb66b53399c6cd82e925055992fa74c6787c4b5a062b8b12479. Mar 7 01:16:58.807215 containerd[1718]: time="2026-03-07T01:16:58.804809252Z" level=info msg="StartContainer for \"2406ae0f18733cb66b53399c6cd82e925055992fa74c6787c4b5a062b8b12479\" returns successfully" Mar 7 01:16:59.342276 systemd-networkd[1367]: calibb4a1f0d3ff: Gained IPv6LL Mar 7 01:16:59.342654 systemd-networkd[1367]: cali661679d389c: Gained IPv6LL Mar 7 01:16:59.497361 kubelet[3179]: I0307 01:16:59.497119 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5c79b8d867-vk5ds" podStartSLOduration=44.497075963 podStartE2EDuration="44.497075963s" podCreationTimestamp="2026-03-07 01:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:58.548010467 +0000 UTC m=+63.513180500" watchObservedRunningTime="2026-03-07 01:16:59.497075963 +0000 UTC m=+64.462246096" Mar 7 01:16:59.823566 kubelet[3179]: I0307 01:16:59.823490 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-4f4st" podStartSLOduration=33.08656383 podStartE2EDuration="44.823467455s" podCreationTimestamp="2026-03-07 01:16:15 +0000 UTC" firstStartedPulling="2026-03-07 01:16:46.874043243 +0000 UTC m=+51.839213276" lastFinishedPulling="2026-03-07 01:16:58.610946768 +0000 UTC m=+63.576116901" observedRunningTime="2026-03-07 01:16:59.499349599 +0000 UTC 
m=+64.464519632" watchObservedRunningTime="2026-03-07 01:16:59.823467455 +0000 UTC m=+64.788637588" Mar 7 01:17:00.043117 containerd[1718]: time="2026-03-07T01:17:00.043049948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:00.047337 containerd[1718]: time="2026-03-07T01:17:00.047126413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:17:00.052268 containerd[1718]: time="2026-03-07T01:17:00.051338380Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:00.056297 containerd[1718]: time="2026-03-07T01:17:00.056254558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:00.056960 containerd[1718]: time="2026-03-07T01:17:00.056918868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.444896283s" Mar 7 01:17:00.057081 containerd[1718]: time="2026-03-07T01:17:00.056965469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:17:00.058437 containerd[1718]: time="2026-03-07T01:17:00.058407892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:17:00.069701 containerd[1718]: time="2026-03-07T01:17:00.069666471Z" level=info msg="CreateContainer 
within sandbox \"4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:17:00.121807 containerd[1718]: time="2026-03-07T01:17:00.121540596Z" level=info msg="CreateContainer within sandbox \"4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7ff3432d7b867a390400857b7b0a56fe2698e351a58ca7bc00de11264df14216\"" Mar 7 01:17:00.124217 containerd[1718]: time="2026-03-07T01:17:00.122548812Z" level=info msg="StartContainer for \"7ff3432d7b867a390400857b7b0a56fe2698e351a58ca7bc00de11264df14216\"" Mar 7 01:17:00.162214 systemd[1]: run-containerd-runc-k8s.io-7ff3432d7b867a390400857b7b0a56fe2698e351a58ca7bc00de11264df14216-runc.vO1A9q.mount: Deactivated successfully. Mar 7 01:17:00.169275 systemd[1]: Started cri-containerd-7ff3432d7b867a390400857b7b0a56fe2698e351a58ca7bc00de11264df14216.scope - libcontainer container 7ff3432d7b867a390400857b7b0a56fe2698e351a58ca7bc00de11264df14216. 
Mar 7 01:17:00.215650 containerd[1718]: time="2026-03-07T01:17:00.215536491Z" level=info msg="StartContainer for \"7ff3432d7b867a390400857b7b0a56fe2698e351a58ca7bc00de11264df14216\" returns successfully" Mar 7 01:17:01.694687 containerd[1718]: time="2026-03-07T01:17:01.694639919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:01.697256 containerd[1718]: time="2026-03-07T01:17:01.697069657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:17:01.702140 containerd[1718]: time="2026-03-07T01:17:01.700532712Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:01.710023 containerd[1718]: time="2026-03-07T01:17:01.709974362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:01.711478 containerd[1718]: time="2026-03-07T01:17:01.711437886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.652990494s" Mar 7 01:17:01.711642 containerd[1718]: time="2026-03-07T01:17:01.711621089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:17:01.712940 containerd[1718]: time="2026-03-07T01:17:01.712916609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:17:01.718530 
containerd[1718]: time="2026-03-07T01:17:01.718494098Z" level=info msg="CreateContainer within sandbox \"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:17:01.760544 containerd[1718]: time="2026-03-07T01:17:01.760500766Z" level=info msg="CreateContainer within sandbox \"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7e980240dc106d8aa25d07572b037a1af111277e0a6c3ca9b8e2306f9e795885\"" Mar 7 01:17:01.761584 containerd[1718]: time="2026-03-07T01:17:01.761466881Z" level=info msg="StartContainer for \"7e980240dc106d8aa25d07572b037a1af111277e0a6c3ca9b8e2306f9e795885\"" Mar 7 01:17:01.801284 systemd[1]: Started cri-containerd-7e980240dc106d8aa25d07572b037a1af111277e0a6c3ca9b8e2306f9e795885.scope - libcontainer container 7e980240dc106d8aa25d07572b037a1af111277e0a6c3ca9b8e2306f9e795885. Mar 7 01:17:01.835564 containerd[1718]: time="2026-03-07T01:17:01.835520159Z" level=info msg="StartContainer for \"7e980240dc106d8aa25d07572b037a1af111277e0a6c3ca9b8e2306f9e795885\" returns successfully" Mar 7 01:17:03.551499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763747459.mount: Deactivated successfully. 
Mar 7 01:17:03.606252 containerd[1718]: time="2026-03-07T01:17:03.606200824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:03.609164 containerd[1718]: time="2026-03-07T01:17:03.608968768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:17:03.612299 containerd[1718]: time="2026-03-07T01:17:03.612143919Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:03.617573 containerd[1718]: time="2026-03-07T01:17:03.617419203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:03.618732 containerd[1718]: time="2026-03-07T01:17:03.618256916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.905204205s" Mar 7 01:17:03.618732 containerd[1718]: time="2026-03-07T01:17:03.618375618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:17:03.621452 containerd[1718]: time="2026-03-07T01:17:03.621422567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:17:03.626651 containerd[1718]: time="2026-03-07T01:17:03.626622849Z" level=info msg="CreateContainer within sandbox 
\"4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:17:03.661462 containerd[1718]: time="2026-03-07T01:17:03.661417103Z" level=info msg="CreateContainer within sandbox \"4f85b5464c3f7c002d3980e90171acf64b4cc34e8440de60e05286775f6c7f9c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"20488d852df21f5c472bc93ff298a009f5d7b4a1930e5f28573dfc2cb7507ff9\"" Mar 7 01:17:03.663558 containerd[1718]: time="2026-03-07T01:17:03.662412919Z" level=info msg="StartContainer for \"20488d852df21f5c472bc93ff298a009f5d7b4a1930e5f28573dfc2cb7507ff9\"" Mar 7 01:17:03.696297 systemd[1]: Started cri-containerd-20488d852df21f5c472bc93ff298a009f5d7b4a1930e5f28573dfc2cb7507ff9.scope - libcontainer container 20488d852df21f5c472bc93ff298a009f5d7b4a1930e5f28573dfc2cb7507ff9. Mar 7 01:17:03.748802 containerd[1718]: time="2026-03-07T01:17:03.748760192Z" level=info msg="StartContainer for \"20488d852df21f5c472bc93ff298a009f5d7b4a1930e5f28573dfc2cb7507ff9\" returns successfully" Mar 7 01:17:04.498131 kubelet[3179]: I0307 01:17:04.497557 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-775dc59d49-2vc9x" podStartSLOduration=2.292687355 podStartE2EDuration="18.497428943s" podCreationTimestamp="2026-03-07 01:16:46 +0000 UTC" firstStartedPulling="2026-03-07 01:16:47.414788948 +0000 UTC m=+52.379958981" lastFinishedPulling="2026-03-07 01:17:03.619530436 +0000 UTC m=+68.584700569" observedRunningTime="2026-03-07 01:17:04.496315726 +0000 UTC m=+69.461485759" watchObservedRunningTime="2026-03-07 01:17:04.497428943 +0000 UTC m=+69.462598976" Mar 7 01:17:04.550879 kubelet[3179]: I0307 01:17:04.550840 3179 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:17:05.326205 containerd[1718]: time="2026-03-07T01:17:05.326150693Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:05.329148 containerd[1718]: time="2026-03-07T01:17:05.328892636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:17:05.332114 containerd[1718]: time="2026-03-07T01:17:05.332019385Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:05.337644 containerd[1718]: time="2026-03-07T01:17:05.336849060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:05.338024 containerd[1718]: time="2026-03-07T01:17:05.337986678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.716176905s" Mar 7 01:17:05.338183 containerd[1718]: time="2026-03-07T01:17:05.338161981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:17:05.354437 containerd[1718]: time="2026-03-07T01:17:05.354393134Z" level=info msg="CreateContainer within sandbox \"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:17:05.387137 containerd[1718]: time="2026-03-07T01:17:05.387084345Z" level=info 
msg="CreateContainer within sandbox \"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"da8292d99dc7240be9e512343b6c4d054772e8e41b0263a76436a02c0e3525aa\"" Mar 7 01:17:05.388071 containerd[1718]: time="2026-03-07T01:17:05.387943558Z" level=info msg="StartContainer for \"da8292d99dc7240be9e512343b6c4d054772e8e41b0263a76436a02c0e3525aa\"" Mar 7 01:17:05.428289 systemd[1]: Started cri-containerd-da8292d99dc7240be9e512343b6c4d054772e8e41b0263a76436a02c0e3525aa.scope - libcontainer container da8292d99dc7240be9e512343b6c4d054772e8e41b0263a76436a02c0e3525aa. Mar 7 01:17:05.460013 containerd[1718]: time="2026-03-07T01:17:05.459966684Z" level=info msg="StartContainer for \"da8292d99dc7240be9e512343b6c4d054772e8e41b0263a76436a02c0e3525aa\" returns successfully" Mar 7 01:17:06.235184 kubelet[3179]: I0307 01:17:06.235150 3179 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:17:06.235184 kubelet[3179]: I0307 01:17:06.235190 3179 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:17:13.041448 systemd[1]: Started sshd@7-10.200.8.30:22-10.200.16.10:59288.service - OpenSSH per-connection server daemon (10.200.16.10:59288). Mar 7 01:17:13.665146 sshd[6178]: Accepted publickey for core from 10.200.16.10 port 59288 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:13.666143 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:13.671694 systemd-logind[1702]: New session 10 of user core. Mar 7 01:17:13.677303 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 7 01:17:14.182200 sshd[6178]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:14.185171 systemd[1]: sshd@7-10.200.8.30:22-10.200.16.10:59288.service: Deactivated successfully. Mar 7 01:17:14.187763 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:17:14.189840 systemd-logind[1702]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:17:14.190817 systemd-logind[1702]: Removed session 10. Mar 7 01:17:16.399861 systemd[1]: run-containerd-runc-k8s.io-92fe672a76044d4d414821e5ba01d4e35498713b867fd60c0bbd37b259397d51-runc.a8xT90.mount: Deactivated successfully. Mar 7 01:17:16.494784 kubelet[3179]: I0307 01:17:16.492840 3179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gkvkn" podStartSLOduration=51.350292362 podStartE2EDuration="1m0.492817691s" podCreationTimestamp="2026-03-07 01:16:16 +0000 UTC" firstStartedPulling="2026-03-07 01:16:56.201066636 +0000 UTC m=+61.166236669" lastFinishedPulling="2026-03-07 01:17:05.343591965 +0000 UTC m=+70.308761998" observedRunningTime="2026-03-07 01:17:05.504096073 +0000 UTC m=+70.469266206" watchObservedRunningTime="2026-03-07 01:17:16.492817691 +0000 UTC m=+81.457987824" Mar 7 01:17:19.292875 systemd[1]: Started sshd@8-10.200.8.30:22-10.200.16.10:59292.service - OpenSSH per-connection server daemon (10.200.16.10:59292). Mar 7 01:17:19.921562 sshd[6224]: Accepted publickey for core from 10.200.16.10 port 59292 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:19.922205 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:19.927258 systemd-logind[1702]: New session 11 of user core. Mar 7 01:17:19.932265 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:17:20.422272 sshd[6224]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:20.425323 systemd[1]: sshd@8-10.200.8.30:22-10.200.16.10:59292.service: Deactivated successfully. 
Mar 7 01:17:20.427787 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:17:20.429892 systemd-logind[1702]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:17:20.431316 systemd-logind[1702]: Removed session 11. Mar 7 01:17:25.539345 systemd[1]: Started sshd@9-10.200.8.30:22-10.200.16.10:34904.service - OpenSSH per-connection server daemon (10.200.16.10:34904). Mar 7 01:17:26.171616 sshd[6276]: Accepted publickey for core from 10.200.16.10 port 34904 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:26.173205 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:26.177404 systemd-logind[1702]: New session 12 of user core. Mar 7 01:17:26.186279 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:17:26.676318 sshd[6276]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:26.680206 systemd[1]: sshd@9-10.200.8.30:22-10.200.16.10:34904.service: Deactivated successfully. Mar 7 01:17:26.682988 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:17:26.685665 systemd-logind[1702]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:17:26.686641 systemd-logind[1702]: Removed session 12. Mar 7 01:17:31.497338 systemd[1]: run-containerd-runc-k8s.io-2406ae0f18733cb66b53399c6cd82e925055992fa74c6787c4b5a062b8b12479-runc.YQdXu8.mount: Deactivated successfully. Mar 7 01:17:31.792424 systemd[1]: Started sshd@10-10.200.8.30:22-10.200.16.10:39790.service - OpenSSH per-connection server daemon (10.200.16.10:39790). Mar 7 01:17:32.415495 sshd[6324]: Accepted publickey for core from 10.200.16.10 port 39790 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:32.417025 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:32.422176 systemd-logind[1702]: New session 13 of user core. Mar 7 01:17:32.426405 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 7 01:17:32.922487 sshd[6324]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:32.927746 systemd-logind[1702]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:17:32.927886 systemd[1]: sshd@10-10.200.8.30:22-10.200.16.10:39790.service: Deactivated successfully. Mar 7 01:17:32.931289 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:17:32.932681 systemd-logind[1702]: Removed session 13. Mar 7 01:17:38.049412 systemd[1]: Started sshd@11-10.200.8.30:22-10.200.16.10:39800.service - OpenSSH per-connection server daemon (10.200.16.10:39800). Mar 7 01:17:38.698423 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 39800 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:38.699968 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:38.705454 systemd-logind[1702]: New session 14 of user core. Mar 7 01:17:38.709587 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:17:39.203874 sshd[6359]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:39.207239 systemd[1]: sshd@11-10.200.8.30:22-10.200.16.10:39800.service: Deactivated successfully. Mar 7 01:17:39.209731 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:17:39.211445 systemd-logind[1702]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:17:39.212766 systemd-logind[1702]: Removed session 14. Mar 7 01:17:39.319457 systemd[1]: Started sshd@12-10.200.8.30:22-10.200.16.10:39810.service - OpenSSH per-connection server daemon (10.200.16.10:39810). Mar 7 01:17:39.942666 sshd[6376]: Accepted publickey for core from 10.200.16.10 port 39810 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:39.944301 sshd[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:39.951231 systemd-logind[1702]: New session 15 of user core. 
Mar 7 01:17:39.954279 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:17:40.506772 sshd[6376]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:40.510782 systemd[1]: sshd@12-10.200.8.30:22-10.200.16.10:39810.service: Deactivated successfully. Mar 7 01:17:40.513574 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:17:40.514438 systemd-logind[1702]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:17:40.516086 systemd-logind[1702]: Removed session 15. Mar 7 01:17:40.622476 systemd[1]: Started sshd@13-10.200.8.30:22-10.200.16.10:37626.service - OpenSSH per-connection server daemon (10.200.16.10:37626). Mar 7 01:17:41.249148 sshd[6387]: Accepted publickey for core from 10.200.16.10 port 37626 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:41.250342 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:41.255553 systemd-logind[1702]: New session 16 of user core. Mar 7 01:17:41.263274 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:17:41.758135 sshd[6387]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:41.761983 systemd[1]: sshd@13-10.200.8.30:22-10.200.16.10:37626.service: Deactivated successfully. Mar 7 01:17:41.764158 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:17:41.764960 systemd-logind[1702]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:17:41.766220 systemd-logind[1702]: Removed session 16. Mar 7 01:17:46.388398 systemd[1]: run-containerd-runc-k8s.io-92fe672a76044d4d414821e5ba01d4e35498713b867fd60c0bbd37b259397d51-runc.5vQvh5.mount: Deactivated successfully. Mar 7 01:17:46.866017 systemd[1]: Started sshd@14-10.200.8.30:22-10.200.16.10:37638.service - OpenSSH per-connection server daemon (10.200.16.10:37638). 
Mar 7 01:17:47.497538 sshd[6420]: Accepted publickey for core from 10.200.16.10 port 37638 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:47.499089 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:47.503157 systemd-logind[1702]: New session 17 of user core. Mar 7 01:17:47.510273 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:17:47.998195 sshd[6420]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:48.002889 systemd-logind[1702]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:17:48.003319 systemd[1]: sshd@14-10.200.8.30:22-10.200.16.10:37638.service: Deactivated successfully. Mar 7 01:17:48.006025 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:17:48.007040 systemd-logind[1702]: Removed session 17. Mar 7 01:17:48.111446 systemd[1]: Started sshd@15-10.200.8.30:22-10.200.16.10:37646.service - OpenSSH per-connection server daemon (10.200.16.10:37646). Mar 7 01:17:48.741514 sshd[6433]: Accepted publickey for core from 10.200.16.10 port 37646 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:48.743188 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:48.752816 systemd-logind[1702]: New session 18 of user core. Mar 7 01:17:48.757334 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:17:49.406087 sshd[6433]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:49.410904 systemd-logind[1702]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:17:49.411865 systemd[1]: sshd@15-10.200.8.30:22-10.200.16.10:37646.service: Deactivated successfully. Mar 7 01:17:49.414547 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:17:49.415995 systemd-logind[1702]: Removed session 18. 
Mar 7 01:17:49.523816 systemd[1]: Started sshd@16-10.200.8.30:22-10.200.16.10:37662.service - OpenSSH per-connection server daemon (10.200.16.10:37662). Mar 7 01:17:50.146888 sshd[6444]: Accepted publickey for core from 10.200.16.10 port 37662 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:50.148504 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:50.153256 systemd-logind[1702]: New session 19 of user core. Mar 7 01:17:50.157261 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:17:51.300289 sshd[6444]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:51.304332 systemd-logind[1702]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:17:51.305166 systemd[1]: sshd@16-10.200.8.30:22-10.200.16.10:37662.service: Deactivated successfully. Mar 7 01:17:51.307704 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:17:51.309261 systemd-logind[1702]: Removed session 19. Mar 7 01:17:51.416567 systemd[1]: Started sshd@17-10.200.8.30:22-10.200.16.10:45248.service - OpenSSH per-connection server daemon (10.200.16.10:45248). Mar 7 01:17:52.038785 sshd[6468]: Accepted publickey for core from 10.200.16.10 port 45248 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:52.040464 sshd[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:52.045737 systemd-logind[1702]: New session 20 of user core. Mar 7 01:17:52.052297 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:17:52.657374 sshd[6468]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:52.661181 systemd[1]: sshd@17-10.200.8.30:22-10.200.16.10:45248.service: Deactivated successfully. Mar 7 01:17:52.663711 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:17:52.664585 systemd-logind[1702]: Session 20 logged out. Waiting for processes to exit. 
Mar 7 01:17:52.666510 systemd-logind[1702]: Removed session 20. Mar 7 01:17:52.771449 systemd[1]: Started sshd@18-10.200.8.30:22-10.200.16.10:45254.service - OpenSSH per-connection server daemon (10.200.16.10:45254). Mar 7 01:17:53.398713 sshd[6481]: Accepted publickey for core from 10.200.16.10 port 45254 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:17:53.400430 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:17:53.405394 systemd-logind[1702]: New session 21 of user core. Mar 7 01:17:53.413263 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:17:53.897873 sshd[6481]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:53.901701 systemd-logind[1702]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:17:53.902435 systemd[1]: sshd@18-10.200.8.30:22-10.200.16.10:45254.service: Deactivated successfully. Mar 7 01:17:53.904795 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:17:53.906146 systemd-logind[1702]: Removed session 21. Mar 7 01:17:57.165518 containerd[1718]: time="2026-03-07T01:17:57.165468617Z" level=info msg="StopPodSandbox for \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\"" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.199 [WARNING][6540] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28e408df-d765-4f81-83a6-862639ee589c", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf", Pod:"csi-node-driver-gkvkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c483475f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.200 [INFO][6540] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.200 [INFO][6540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" iface="eth0" netns="" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.200 [INFO][6540] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.200 [INFO][6540] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.223 [INFO][6547] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.223 [INFO][6547] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.223 [INFO][6547] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.228 [WARNING][6547] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.228 [INFO][6547] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.230 [INFO][6547] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:57.232633 containerd[1718]: 2026-03-07 01:17:57.231 [INFO][6540] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.233540 containerd[1718]: time="2026-03-07T01:17:57.232631959Z" level=info msg="TearDown network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\" successfully" Mar 7 01:17:57.233540 containerd[1718]: time="2026-03-07T01:17:57.232668760Z" level=info msg="StopPodSandbox for \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\" returns successfully" Mar 7 01:17:57.233540 containerd[1718]: time="2026-03-07T01:17:57.233217868Z" level=info msg="RemovePodSandbox for \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\"" Mar 7 01:17:57.233540 containerd[1718]: time="2026-03-07T01:17:57.233253669Z" level=info msg="Forcibly stopping sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\"" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.268 [WARNING][6562] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28e408df-d765-4f81-83a6-862639ee589c", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"f7e3eac38d6332f46caacee41e5a06c92454148a38e3098184376c25c0743ccf", Pod:"csi-node-driver-gkvkn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c483475f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.269 [INFO][6562] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.269 [INFO][6562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" iface="eth0" netns="" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.269 [INFO][6562] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.269 [INFO][6562] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.290 [INFO][6569] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.290 [INFO][6569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.290 [INFO][6569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.298 [WARNING][6569] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.299 [INFO][6569] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" HandleID="k8s-pod-network.2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Workload="ci--4081.3.6--n--1070eafa86-k8s-csi--node--driver--gkvkn-eth0" Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.300 [INFO][6569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:57.302966 containerd[1718]: 2026-03-07 01:17:57.301 [INFO][6562] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0" Mar 7 01:17:57.303735 containerd[1718]: time="2026-03-07T01:17:57.303027251Z" level=info msg="TearDown network for sandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\" successfully" Mar 7 01:17:57.314558 containerd[1718]: time="2026-03-07T01:17:57.314492829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:57.314721 containerd[1718]: time="2026-03-07T01:17:57.314590131Z" level=info msg="RemovePodSandbox \"2a1512a0555e3c64e2a67503697698eb27d9fe436ecea1c8cd73d7219a2d13f0\" returns successfully" Mar 7 01:17:57.315237 containerd[1718]: time="2026-03-07T01:17:57.315207040Z" level=info msg="StopPodSandbox for \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\"" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.348 [WARNING][6584] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"7e3699ed-bb54-40b3-937a-980b33b5dfb7", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2", Pod:"calico-apiserver-5c79b8d867-vk5ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali661679d389c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.348 [INFO][6584] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.348 [INFO][6584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" iface="eth0" netns="" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.348 [INFO][6584] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.348 [INFO][6584] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.370 [INFO][6591] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.370 [INFO][6591] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.370 [INFO][6591] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.378 [WARNING][6591] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.378 [INFO][6591] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0" Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.380 [INFO][6591] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:57.382642 containerd[1718]: 2026-03-07 01:17:57.381 [INFO][6584] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Mar 7 01:17:57.383861 containerd[1718]: time="2026-03-07T01:17:57.382686887Z" level=info msg="TearDown network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\" successfully" Mar 7 01:17:57.383861 containerd[1718]: time="2026-03-07T01:17:57.382719288Z" level=info msg="StopPodSandbox for \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\" returns successfully" Mar 7 01:17:57.383861 containerd[1718]: time="2026-03-07T01:17:57.383245696Z" level=info msg="RemovePodSandbox for \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\"" Mar 7 01:17:57.383861 containerd[1718]: time="2026-03-07T01:17:57.383272997Z" level=info msg="Forcibly stopping sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\"" Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.419 [WARNING][6605] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0", GenerateName:"calico-apiserver-5c79b8d867-", Namespace:"calico-system", SelfLink:"", UID:"7e3699ed-bb54-40b3-937a-980b33b5dfb7", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c79b8d867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"e91e7ef5225e466f9d39a83518346e632301a80d9c292a7879b326d8fa587ff2", Pod:"calico-apiserver-5c79b8d867-vk5ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali661679d389c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.419 [INFO][6605] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b"
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.419 [INFO][6605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" iface="eth0" netns=""
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.419 [INFO][6605] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b"
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.419 [INFO][6605] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b"
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.440 [INFO][6612] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0"
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.441 [INFO][6612] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.441 [INFO][6612] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.447 [WARNING][6612] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0"
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.447 [INFO][6612] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" HandleID="k8s-pod-network.21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b" Workload="ci--4081.3.6--n--1070eafa86-k8s-calico--apiserver--5c79b8d867--vk5ds-eth0"
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.448 [INFO][6612] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:57.451440 containerd[1718]: 2026-03-07 01:17:57.450 [INFO][6605] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b"
Mar 7 01:17:57.451440 containerd[1718]: time="2026-03-07T01:17:57.451411054Z" level=info msg="TearDown network for sandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\" successfully"
Mar 7 01:17:57.462143 containerd[1718]: time="2026-03-07T01:17:57.462064719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:17:57.462333 containerd[1718]: time="2026-03-07T01:17:57.462173621Z" level=info msg="RemovePodSandbox \"21674903a812635c508587745e1fb6112613e3cbfe89c67c6e1cb9581b21f58b\" returns successfully"
Mar 7 01:17:57.462725 containerd[1718]: time="2026-03-07T01:17:57.462697129Z" level=info msg="StopPodSandbox for \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\""
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.497 [WARNING][6626] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a2c57fb9-f299-436c-8599-34cdb3343b9a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316", Pod:"coredns-66bc5c9577-8rgcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb4a1f0d3ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.497 [INFO][6626] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.497 [INFO][6626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" iface="eth0" netns=""
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.497 [INFO][6626] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.497 [INFO][6626] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.518 [INFO][6633] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0"
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.518 [INFO][6633] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.519 [INFO][6633] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.525 [WARNING][6633] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0"
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.525 [INFO][6633] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0"
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.527 [INFO][6633] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:57.529856 containerd[1718]: 2026-03-07 01:17:57.528 [INFO][6626] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.531154 containerd[1718]: time="2026-03-07T01:17:57.529902672Z" level=info msg="TearDown network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\" successfully"
Mar 7 01:17:57.531154 containerd[1718]: time="2026-03-07T01:17:57.529933272Z" level=info msg="StopPodSandbox for \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\" returns successfully"
Mar 7 01:17:57.531154 containerd[1718]: time="2026-03-07T01:17:57.530541682Z" level=info msg="RemovePodSandbox for \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\""
Mar 7 01:17:57.531154 containerd[1718]: time="2026-03-07T01:17:57.530576782Z" level=info msg="Forcibly stopping sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\""
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.566 [WARNING][6647] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a2c57fb9-f299-436c-8599-34cdb3343b9a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-1070eafa86", ContainerID:"59704b3f1b9c55764d4d77f606efd28b1b90128a029b1af85e86a4d8d5fee316", Pod:"coredns-66bc5c9577-8rgcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb4a1f0d3ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.566 [INFO][6647] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.566 [INFO][6647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" iface="eth0" netns=""
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.566 [INFO][6647] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.566 [INFO][6647] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.594 [INFO][6654] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0"
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.594 [INFO][6654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.594 [INFO][6654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.615 [WARNING][6654] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0"
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.615 [INFO][6654] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" HandleID="k8s-pod-network.4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae" Workload="ci--4081.3.6--n--1070eafa86-k8s-coredns--66bc5c9577--8rgcl-eth0"
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.619 [INFO][6654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 01:17:57.624688 containerd[1718]: 2026-03-07 01:17:57.622 [INFO][6647] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae"
Mar 7 01:17:57.626227 containerd[1718]: time="2026-03-07T01:17:57.625452055Z" level=info msg="TearDown network for sandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\" successfully"
Mar 7 01:17:57.634967 containerd[1718]: time="2026-03-07T01:17:57.634788899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:17:57.634967 containerd[1718]: time="2026-03-07T01:17:57.634862501Z" level=info msg="RemovePodSandbox \"4b431867e9e3785159e8e547a77cb5f89374dc29bb9e9b5d76f418365867eaae\" returns successfully"
Mar 7 01:17:59.010426 systemd[1]: Started sshd@19-10.200.8.30:22-10.200.16.10:45264.service - OpenSSH per-connection server daemon (10.200.16.10:45264).
Mar 7 01:17:59.635131 sshd[6664]: Accepted publickey for core from 10.200.16.10 port 45264 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:17:59.636212 sshd[6664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:17:59.641196 systemd-logind[1702]: New session 22 of user core.
Mar 7 01:17:59.648293 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:18:00.131643 sshd[6664]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:00.136173 systemd-logind[1702]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:18:00.136868 systemd[1]: sshd@19-10.200.8.30:22-10.200.16.10:45264.service: Deactivated successfully.
Mar 7 01:18:00.140681 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:18:00.141729 systemd-logind[1702]: Removed session 22.
Mar 7 01:18:01.500517 systemd[1]: run-containerd-runc-k8s.io-2406ae0f18733cb66b53399c6cd82e925055992fa74c6787c4b5a062b8b12479-runc.dvsHua.mount: Deactivated successfully.
Mar 7 01:18:05.249402 systemd[1]: Started sshd@20-10.200.8.30:22-10.200.16.10:48150.service - OpenSSH per-connection server daemon (10.200.16.10:48150).
Mar 7 01:18:05.872150 sshd[6698]: Accepted publickey for core from 10.200.16.10 port 48150 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:18:05.873523 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:05.878426 systemd-logind[1702]: New session 23 of user core.
Mar 7 01:18:05.883270 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:18:06.370464 sshd[6698]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:06.373595 systemd-logind[1702]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:18:06.373971 systemd[1]: sshd@20-10.200.8.30:22-10.200.16.10:48150.service: Deactivated successfully.
Mar 7 01:18:06.376748 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:18:06.379003 systemd-logind[1702]: Removed session 23.
Mar 7 01:18:11.485433 systemd[1]: Started sshd@21-10.200.8.30:22-10.200.16.10:42560.service - OpenSSH per-connection server daemon (10.200.16.10:42560).
Mar 7 01:18:12.114415 sshd[6721]: Accepted publickey for core from 10.200.16.10 port 42560 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:18:12.118124 sshd[6721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:12.130462 systemd-logind[1702]: New session 24 of user core.
Mar 7 01:18:12.134852 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:18:12.661940 sshd[6721]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:12.669440 systemd[1]: sshd@21-10.200.8.30:22-10.200.16.10:42560.service: Deactivated successfully.
Mar 7 01:18:12.671359 systemd-logind[1702]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:18:12.672849 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:18:12.676229 systemd-logind[1702]: Removed session 24.
Mar 7 01:18:17.779432 systemd[1]: Started sshd@22-10.200.8.30:22-10.200.16.10:42572.service - OpenSSH per-connection server daemon (10.200.16.10:42572).
Mar 7 01:18:18.400386 sshd[6754]: Accepted publickey for core from 10.200.16.10 port 42572 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:18:18.401997 sshd[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:18.406021 systemd-logind[1702]: New session 25 of user core.
Mar 7 01:18:18.410425 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:18:18.932524 sshd[6754]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:18.936471 systemd-logind[1702]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:18:18.937404 systemd[1]: sshd@22-10.200.8.30:22-10.200.16.10:42572.service: Deactivated successfully.
Mar 7 01:18:18.939699 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:18:18.940850 systemd-logind[1702]: Removed session 25.
Mar 7 01:18:24.058421 systemd[1]: Started sshd@23-10.200.8.30:22-10.200.16.10:54962.service - OpenSSH per-connection server daemon (10.200.16.10:54962).
Mar 7 01:18:24.685838 sshd[6825]: Accepted publickey for core from 10.200.16.10 port 54962 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q
Mar 7 01:18:24.687513 sshd[6825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:18:24.692305 systemd-logind[1702]: New session 26 of user core.
Mar 7 01:18:24.695264 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:18:25.187882 sshd[6825]: pam_unix(sshd:session): session closed for user core
Mar 7 01:18:25.192784 systemd[1]: sshd@23-10.200.8.30:22-10.200.16.10:54962.service: Deactivated successfully.
Mar 7 01:18:25.194167 systemd-logind[1702]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:18:25.196049 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:18:25.202353 systemd-logind[1702]: Removed session 26.