Apr 24 23:54:52.120748 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:54:52.120774 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:54:52.120786 kernel: BIOS-provided physical RAM map:
Apr 24 23:54:52.120792 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 23:54:52.120798 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 24 23:54:52.120805 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Apr 24 23:54:52.120812 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Apr 24 23:54:52.120823 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Apr 24 23:54:52.120832 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Apr 24 23:54:52.120839 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Apr 24 23:54:52.120848 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 24 23:54:52.120856 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 24 23:54:52.120862 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 24 23:54:52.120871 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 24 23:54:52.120883 kernel: printk: bootconsole [earlyser0] enabled
Apr 24 23:54:52.120891 kernel: NX (Execute Disable) protection: active
Apr 24 23:54:52.120902 kernel: APIC: Static calls initialized
Apr 24 23:54:52.120909 kernel: efi: EFI v2.7 by Microsoft
Apr 24 23:54:52.120919 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f420518
Apr 24 23:54:52.120927 kernel: SMBIOS 3.1.0 present.
Apr 24 23:54:52.120934 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 24 23:54:52.120941 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 24 23:54:52.120953 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 24 23:54:52.120960 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0
Apr 24 23:54:52.120969 kernel: Hyper-V: Nested features: 0x1e0101
Apr 24 23:54:52.120979 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 24 23:54:52.120986 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 24 23:54:52.120993 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 24 23:54:52.121005 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 24 23:54:52.121013 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 24 23:54:52.121022 kernel: tsc: Detected 2593.907 MHz processor
Apr 24 23:54:52.121032 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:54:52.121039 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:54:52.121051 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 24 23:54:52.121060 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 23:54:52.121067 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:54:52.121074 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 24 23:54:52.121085 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 24 23:54:52.121092 kernel: Using GB pages for direct mapping
Apr 24 23:54:52.121100 kernel: Secure boot disabled
Apr 24 23:54:52.121115 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:54:52.121127 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 24 23:54:52.121136 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121144 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121156 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 24 23:54:52.121164 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 24 23:54:52.121174 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121184 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121194 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121206 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121213 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121223 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:54:52.121233 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 24 23:54:52.121240 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Apr 24 23:54:52.121252 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 24 23:54:52.121260 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 24 23:54:52.121268 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 24 23:54:52.121284 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 24 23:54:52.121291 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 24 23:54:52.121299 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Apr 24 23:54:52.121306 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 24 23:54:52.121314 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 24 23:54:52.121324 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 24 23:54:52.121333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 24 23:54:52.121341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 24 23:54:52.121348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 24 23:54:52.121362 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 24 23:54:52.121370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 24 23:54:52.121380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 24 23:54:52.121411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 24 23:54:52.121418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 24 23:54:52.121431 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 24 23:54:52.121439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 24 23:54:52.121449 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 24 23:54:52.121461 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 24 23:54:52.121469 kernel: Zone ranges:
Apr 24 23:54:52.121481 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:54:52.121489 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 24 23:54:52.121496 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 24 23:54:52.121504 kernel: Movable zone start for each node
Apr 24 23:54:52.121515 kernel: Early memory node ranges
Apr 24 23:54:52.121523 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 24 23:54:52.121533 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Apr 24 23:54:52.121544 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Apr 24 23:54:52.121552 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 24 23:54:52.121564 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 24 23:54:52.121571 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 24 23:54:52.121582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:54:52.121591 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 24 23:54:52.121603 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 24 23:54:52.121611 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Apr 24 23:54:52.121619 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 24 23:54:52.121633 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 24 23:54:52.121641 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:54:52.121653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:54:52.121660 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:54:52.121671 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 24 23:54:52.121680 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 24 23:54:52.121688 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 24 23:54:52.121700 kernel: Booting paravirtualized kernel on Hyper-V
Apr 24 23:54:52.121707 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:54:52.121722 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 23:54:52.121731 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 24 23:54:52.121738 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 24 23:54:52.121748 kernel: pcpu-alloc: [0] 0 1
Apr 24 23:54:52.121757 kernel: Hyper-V: PV spinlocks enabled
Apr 24 23:54:52.121764 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:54:52.121777 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:54:52.121785 kernel: random: crng init done
Apr 24 23:54:52.121800 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 24 23:54:52.121807 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:54:52.121815 kernel: Fallback order for Node 0: 0
Apr 24 23:54:52.121826 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Apr 24 23:54:52.121835 kernel: Policy zone: Normal
Apr 24 23:54:52.121845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:54:52.121855 kernel: software IO TLB: area num 2.
Apr 24 23:54:52.121863 kernel: Memory: 8061212K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 321756K reserved, 0K cma-reserved)
Apr 24 23:54:52.121875 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:54:52.121891 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:54:52.121904 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:54:52.121912 kernel: Dynamic Preempt: voluntary
Apr 24 23:54:52.121927 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:54:52.121936 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:54:52.121949 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:54:52.121957 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:54:52.121965 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:54:52.121977 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:54:52.121989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:54:52.122000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:54:52.122010 kernel: Using NULL legacy PIC
Apr 24 23:54:52.122023 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 24 23:54:52.122031 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:54:52.122043 kernel: Console: colour dummy device 80x25
Apr 24 23:54:52.122052 kernel: printk: console [tty1] enabled
Apr 24 23:54:52.122060 kernel: printk: console [ttyS0] enabled
Apr 24 23:54:52.122074 kernel: printk: bootconsole [earlyser0] disabled
Apr 24 23:54:52.122082 kernel: ACPI: Core revision 20230628
Apr 24 23:54:52.122095 kernel: Failed to register legacy timer interrupt
Apr 24 23:54:52.122103 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:54:52.122112 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 24 23:54:52.122124 kernel: Hyper-V: Using IPI hypercalls
Apr 24 23:54:52.122132 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 24 23:54:52.122143 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 24 23:54:52.122153 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 24 23:54:52.122166 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 24 23:54:52.122176 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 24 23:54:52.122183 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 24 23:54:52.122195 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Apr 24 23:54:52.122204 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 24 23:54:52.122213 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 24 23:54:52.122225 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:54:52.122233 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:54:52.122245 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:54:52.122253 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 23:54:52.122268 kernel: RETBleed: Vulnerable
Apr 24 23:54:52.122276 kernel: Speculative Store Bypass: Vulnerable
Apr 24 23:54:52.122286 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:54:52.122296 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:54:52.122304 kernel: active return thunk: its_return_thunk
Apr 24 23:54:52.122315 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 23:54:52.122324 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:54:52.122332 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:54:52.122353 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:54:52.122363 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:54:52.122375 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:54:52.124424 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:54:52.124448 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:54:52.124464 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 23:54:52.124479 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 23:54:52.124494 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:54:52.124508 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 23:54:52.124523 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:54:52.124537 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:54:52.124551 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:54:52.124565 kernel: landlock: Up and running.
Apr 24 23:54:52.124578 kernel: SELinux: Initializing.
Apr 24 23:54:52.124598 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 24 23:54:52.124612 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 24 23:54:52.124626 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 24 23:54:52.124639 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:54:52.124653 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:54:52.124667 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:54:52.124681 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 24 23:54:52.124695 kernel: signal: max sigframe size: 3632
Apr 24 23:54:52.124708 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:54:52.124726 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:54:52.124741 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 23:54:52.124756 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:54:52.124770 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:54:52.124783 kernel: .... node #0, CPUs: #1
Apr 24 23:54:52.124799 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 24 23:54:52.124815 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 24 23:54:52.124830 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:54:52.124844 kernel: smpboot: Max logical packages: 1
Apr 24 23:54:52.124862 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 24 23:54:52.124876 kernel: devtmpfs: initialized
Apr 24 23:54:52.124891 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:54:52.124906 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 24 23:54:52.124920 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:54:52.124935 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:54:52.124949 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:54:52.124963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:54:52.124978 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:54:52.124995 kernel: audit: type=2000 audit(1777074890.030:1): state=initialized audit_enabled=0 res=1
Apr 24 23:54:52.125009 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:54:52.125023 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:54:52.125037 kernel: cpuidle: using governor menu
Apr 24 23:54:52.125052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:54:52.125067 kernel: dca service started, version 1.12.1
Apr 24 23:54:52.125081 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Apr 24 23:54:52.125096 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Apr 24 23:54:52.125110 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:54:52.125128 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:54:52.125143 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:54:52.125157 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:54:52.125172 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:54:52.125187 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:54:52.125202 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:54:52.125216 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:54:52.125231 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:54:52.125249 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:54:52.125264 kernel: ACPI: Interpreter enabled
Apr 24 23:54:52.125279 kernel: ACPI: PM: (supports S0 S5)
Apr 24 23:54:52.125294 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:54:52.125309 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:54:52.125325 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 24 23:54:52.125340 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 24 23:54:52.125355 kernel: iommu: Default domain type: Translated
Apr 24 23:54:52.125371 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:54:52.125406 kernel: efivars: Registered efivars operations
Apr 24 23:54:52.125425 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:54:52.125440 kernel: PCI: System does not support PCI
Apr 24 23:54:52.125454 kernel: vgaarb: loaded
Apr 24 23:54:52.125469 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 24 23:54:52.125484 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:54:52.125498 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:54:52.125513 kernel: pnp: PnP ACPI init
Apr 24 23:54:52.125529 kernel: pnp: PnP ACPI: found 3 devices
Apr 24 23:54:52.125543 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:54:52.125561 kernel: NET: Registered PF_INET protocol family
Apr 24 23:54:52.125576 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 24 23:54:52.125591 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 24 23:54:52.125603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:54:52.125618 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:54:52.125631 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 24 23:54:52.125646 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 24 23:54:52.125662 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:54:52.125675 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:54:52.125694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:54:52.125708 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:54:52.125724 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:54:52.125739 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 23:54:52.125754 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 24 23:54:52.125768 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:54:52.125782 kernel: Initialise system trusted keyrings
Apr 24 23:54:52.125797 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 24 23:54:52.125815 kernel: Key type asymmetric registered
Apr 24 23:54:52.125829 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:54:52.125842 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:54:52.125856 kernel: io scheduler mq-deadline registered
Apr 24 23:54:52.125870 kernel: io scheduler kyber registered
Apr 24 23:54:52.125885 kernel: io scheduler bfq registered
Apr 24 23:54:52.125900 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:54:52.125915 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:54:52.125930 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:54:52.125944 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 24 23:54:52.125963 kernel: i8042: PNP: No PS/2 controller found.
Apr 24 23:54:52.126158 kernel: rtc_cmos 00:02: registered as rtc0
Apr 24 23:54:52.126304 kernel: rtc_cmos 00:02: setting system clock to 2026-04-24T23:54:51 UTC (1777074891)
Apr 24 23:54:52.126559 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 24 23:54:52.126582 kernel: intel_pstate: CPU model not supported
Apr 24 23:54:52.126596 kernel: efifb: probing for efifb
Apr 24 23:54:52.126610 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 24 23:54:52.126630 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 24 23:54:52.126644 kernel: efifb: scrolling: redraw
Apr 24 23:54:52.126659 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 23:54:52.126672 kernel: Console: switching to colour frame buffer device 128x48
Apr 24 23:54:52.126687 kernel: fb0: EFI VGA frame buffer device
Apr 24 23:54:52.126704 kernel: pstore: Using crash dump compression: deflate
Apr 24 23:54:52.126718 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 23:54:52.126733 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:54:52.126748 kernel: Segment Routing with IPv6
Apr 24 23:54:52.126767 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:54:52.126782 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:54:52.126797 kernel: Key type dns_resolver registered
Apr 24 23:54:52.126812 kernel: IPI shorthand broadcast: enabled
Apr 24 23:54:52.126827 kernel: sched_clock: Marking stable (871002700, 54754400)->(1190490400, -264733300)
Apr 24 23:54:52.126843 kernel: registered taskstats version 1
Apr 24 23:54:52.126858 kernel: Loading compiled-in X.509 certificates
Apr 24 23:54:52.126873 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:54:52.126888 kernel: Key type .fscrypt registered
Apr 24 23:54:52.126906 kernel: Key type fscrypt-provisioning registered
Apr 24 23:54:52.126920 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:54:52.126935 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:54:52.126950 kernel: ima: No architecture policies found
Apr 24 23:54:52.126966 kernel: clk: Disabling unused clocks
Apr 24 23:54:52.126981 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:54:52.126996 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:54:52.127011 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:54:52.127026 kernel: Run /init as init process
Apr 24 23:54:52.127044 kernel: with arguments:
Apr 24 23:54:52.127059 kernel: /init
Apr 24 23:54:52.127074 kernel: with environment:
Apr 24 23:54:52.127089 kernel: HOME=/
Apr 24 23:54:52.127103 kernel: TERM=linux
Apr 24 23:54:52.127121 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:54:52.127139 systemd[1]: Detected virtualization microsoft.
Apr 24 23:54:52.127156 systemd[1]: Detected architecture x86-64.
Apr 24 23:54:52.127174 systemd[1]: Running in initrd.
Apr 24 23:54:52.127189 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:54:52.127204 systemd[1]: Hostname set to .
Apr 24 23:54:52.127220 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:54:52.127236 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:54:52.127251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:54:52.127267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:54:52.127284 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:54:52.127303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:54:52.127318 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:54:52.127334 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:54:52.127353 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:54:52.127369 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:54:52.127398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:54:52.127412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:54:52.127438 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:54:52.127451 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:54:52.127464 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:54:52.127477 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:54:52.127492 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:54:52.127508 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:54:52.127524 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:54:52.127539 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:54:52.127555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:54:52.127574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:54:52.127590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:54:52.127606 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:54:52.127622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:54:52.127638 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:54:52.127654 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:54:52.127670 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:54:52.127687 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:54:52.127705 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:54:52.127746 systemd-journald[177]: Collecting audit messages is disabled.
Apr 24 23:54:52.127782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:52.127798 systemd-journald[177]: Journal started
Apr 24 23:54:52.127834 systemd-journald[177]: Runtime Journal (/run/log/journal/2fffbde258e84252a73eaa97eb29f100) is 8.0M, max 158.7M, 150.7M free.
Apr 24 23:54:52.139405 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:54:52.143755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:54:52.147568 systemd-modules-load[178]: Inserted module 'overlay'
Apr 24 23:54:52.153610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:54:52.161293 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:54:52.163971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:52.181556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:52.190570 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:54:52.199511 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:54:52.214604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:54:52.221164 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:54:52.231123 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 24 23:54:52.234414 kernel: Bridge firewalling registered
Apr 24 23:54:52.236552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:54:52.236928 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:54:52.237336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:52.238307 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:54:52.278076 dracut-cmdline[209]: dracut-dracut-053
Apr 24 23:54:52.278076 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:54:52.243530 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:54:52.244517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:54:52.267818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:54:52.279561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:54:52.317603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:54:52.357226 systemd-resolved[245]: Positive Trust Anchors: Apr 24 23:54:52.357243 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:54:52.357307 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:54:52.387475 systemd-resolved[245]: Defaulting to hostname 'linux'. Apr 24 23:54:52.391525 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:54:52.400761 kernel: SCSI subsystem initialized Apr 24 23:54:52.400936 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:54:52.411402 kernel: Loading iSCSI transport class v2.0-870. Apr 24 23:54:52.423403 kernel: iscsi: registered transport (tcp) Apr 24 23:54:52.444310 kernel: iscsi: registered transport (qla4xxx) Apr 24 23:54:52.444399 kernel: QLogic iSCSI HBA Driver Apr 24 23:54:52.481315 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 24 23:54:52.491600 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 24 23:54:52.522671 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 24 23:54:52.522742 kernel: device-mapper: uevent: version 1.0.3 Apr 24 23:54:52.527402 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 24 23:54:52.566403 kernel: raid6: avx512x4 gen() 18322 MB/s Apr 24 23:54:52.586401 kernel: raid6: avx512x2 gen() 18339 MB/s Apr 24 23:54:52.605396 kernel: raid6: avx512x1 gen() 18304 MB/s Apr 24 23:54:52.624398 kernel: raid6: avx2x4 gen() 18174 MB/s Apr 24 23:54:52.644399 kernel: raid6: avx2x2 gen() 18036 MB/s Apr 24 23:54:52.664932 kernel: raid6: avx2x1 gen() 13753 MB/s Apr 24 23:54:52.664968 kernel: raid6: using algorithm avx512x2 gen() 18339 MB/s Apr 24 23:54:52.686993 kernel: raid6: .... xor() 30368 MB/s, rmw enabled Apr 24 23:54:52.687024 kernel: raid6: using avx512x2 recovery algorithm Apr 24 23:54:52.710407 kernel: xor: automatically using best checksumming function avx Apr 24 23:54:52.859425 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 24 23:54:52.869375 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:54:52.881581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:54:52.896360 systemd-udevd[399]: Using default interface naming scheme 'v255'. Apr 24 23:54:52.901067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:54:52.916539 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 24 23:54:52.932812 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Apr 24 23:54:52.961423 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 23:54:52.970640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:54:53.015417 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 24 23:54:53.027679 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 24 23:54:53.050605 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 24 23:54:53.055635 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:54:53.063183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:54:53.070784 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:54:53.088536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 24 23:54:53.118225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:54:53.122081 kernel: cryptd: max_cpu_qlen set to 1000 Apr 24 23:54:53.130786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:54:53.130934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:54:53.134904 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:54:53.138294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:54:53.138481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:54:53.142085 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:54:53.174409 kernel: AVX2 version of gcm_enc/dec engaged. Apr 24 23:54:53.174462 kernel: AES CTR mode by8 optimization enabled Apr 24 23:54:53.176619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:54:53.195301 kernel: hv_vmbus: Vmbus version:5.2 Apr 24 23:54:53.195179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:54:53.195304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:54:53.211535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 24 23:54:53.229407 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 24 23:54:53.243410 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 24 23:54:53.249631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:54:53.268833 kernel: hv_vmbus: registering driver hv_netvsc Apr 24 23:54:53.268880 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 24 23:54:53.268900 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 24 23:54:53.257603 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:54:53.280407 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 24 23:54:53.292403 kernel: PTP clock support registered Apr 24 23:54:53.292468 kernel: hv_vmbus: registering driver hid_hyperv Apr 24 23:54:53.311398 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 24 23:54:53.317569 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 24 23:54:53.330742 kernel: hv_vmbus: registering driver hv_storvsc Apr 24 23:54:53.330790 kernel: hv_utils: Registering HyperV Utility Driver Apr 24 23:54:53.330810 kernel: hv_vmbus: registering driver hv_utils Apr 24 23:54:53.321854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:54:53.339677 kernel: hv_utils: Heartbeat IC version 3.0 Apr 24 23:54:53.339727 kernel: hv_utils: Shutdown IC version 3.2 Apr 24 23:54:53.341998 kernel: hv_utils: TimeSync IC version 4.0 Apr 24 23:54:53.343402 kernel: scsi host0: storvsc_host_t Apr 24 23:54:53.789632 systemd-resolved[245]: Clock change detected. Flushing caches. 
Apr 24 23:54:53.795397 kernel: scsi host1: storvsc_host_t Apr 24 23:54:53.799694 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 24 23:54:53.804356 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 24 23:54:53.821599 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Apr 24 23:54:53.821877 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 24 23:54:53.823456 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Apr 24 23:54:53.835874 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 24 23:54:53.836186 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Apr 24 23:54:53.838391 kernel: sd 1:0:0:0: [sda] Write Protect is off Apr 24 23:54:53.842581 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 24 23:54:53.842801 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 24 23:54:53.854529 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:54:53.854564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 24 23:54:53.854716 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Apr 24 23:54:53.881360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 24 23:54:53.909472 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: VF slot 1 added Apr 24 23:54:53.920453 kernel: hv_vmbus: registering driver hv_pci Apr 24 23:54:53.920521 kernel: hv_pci 434fe84d-fa72-4dd1-9cac-7ee9f7428642: PCI VMBus probing: Using version 0x10004 Apr 24 23:54:53.929054 kernel: hv_pci 434fe84d-fa72-4dd1-9cac-7ee9f7428642: PCI host bridge to bus fa72:00 Apr 24 23:54:53.929320 kernel: pci_bus fa72:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 24 23:54:53.932690 kernel: pci_bus fa72:00: No busn resource found for root bus, will use [bus 00-ff] Apr 24 23:54:53.942522 kernel: pci fa72:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 24 
23:54:53.948412 kernel: pci fa72:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 24 23:54:53.952434 kernel: pci fa72:00:02.0: enabling Extended Tags Apr 24 23:54:53.964483 kernel: pci fa72:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fa72:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 24 23:54:53.971573 kernel: pci_bus fa72:00: busn_res: [bus 00-ff] end is updated to 00 Apr 24 23:54:53.971862 kernel: pci fa72:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 24 23:54:54.141284 kernel: mlx5_core fa72:00:02.0: enabling device (0000 -> 0002) Apr 24 23:54:54.150373 kernel: mlx5_core fa72:00:02.0: firmware version: 14.30.5026 Apr 24 23:54:54.304699 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 24 23:54:54.317362 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (445) Apr 24 23:54:54.333062 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 24 23:54:54.363572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 24 23:54:54.378364 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (443) Apr 24 23:54:54.395654 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 24 23:54:54.403193 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 24 23:54:54.421531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Apr 24 23:54:54.435573 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: VF registering: eth1 Apr 24 23:54:54.440364 kernel: mlx5_core fa72:00:02.0 eth1: joined to eth0 Apr 24 23:54:54.440586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:54:54.442361 kernel: mlx5_core fa72:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 24 23:54:54.454365 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:54:54.463360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:54:54.476395 kernel: mlx5_core fa72:00:02.0 enP64114s1: renamed from eth1 Apr 24 23:54:55.469397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:54:55.469469 disk-uuid[606]: The operation has completed successfully. Apr 24 23:54:55.553277 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 24 23:54:55.553711 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 24 23:54:55.589486 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 24 23:54:55.598934 sh[720]: Success Apr 24 23:54:55.628367 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 24 23:54:55.907446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 24 23:54:55.919458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 24 23:54:55.926797 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 24 23:54:55.957052 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681 Apr 24 23:54:55.957120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:54:55.961073 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 24 23:54:55.964270 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 24 23:54:55.967020 kernel: BTRFS info (device dm-0): using free space tree Apr 24 23:54:56.189608 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 24 23:54:56.190584 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 24 23:54:56.201607 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 24 23:54:56.207478 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 24 23:54:56.236465 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:56.236516 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:54:56.236538 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:54:56.272367 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:54:56.288002 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 24 23:54:56.292361 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:56.300547 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 24 23:54:56.313580 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 24 23:54:56.321303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:54:56.334552 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 24 23:54:56.357605 systemd-networkd[904]: lo: Link UP Apr 24 23:54:56.357615 systemd-networkd[904]: lo: Gained carrier Apr 24 23:54:56.359976 systemd-networkd[904]: Enumeration completed Apr 24 23:54:56.360253 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:54:56.364152 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:54:56.364155 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:54:56.365444 systemd[1]: Reached target network.target - Network. Apr 24 23:54:56.421377 kernel: mlx5_core fa72:00:02.0 enP64114s1: Link up Apr 24 23:54:56.463363 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: Data path switched to VF: enP64114s1 Apr 24 23:54:56.463539 systemd-networkd[904]: enP64114s1: Link UP Apr 24 23:54:56.463672 systemd-networkd[904]: eth0: Link UP Apr 24 23:54:56.468099 systemd-networkd[904]: eth0: Gained carrier Apr 24 23:54:56.468111 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 24 23:54:56.477546 systemd-networkd[904]: enP64114s1: Gained carrier Apr 24 23:54:56.511448 systemd-networkd[904]: eth0: DHCPv4 address 10.0.0.29/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 24 23:54:56.998799 ignition[899]: Ignition 2.19.0 Apr 24 23:54:56.998815 ignition[899]: Stage: fetch-offline Apr 24 23:54:56.998877 ignition[899]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:56.998890 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:56.999019 ignition[899]: parsed url from cmdline: "" Apr 24 23:54:56.999026 ignition[899]: no config URL provided Apr 24 23:54:56.999033 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 23:54:56.999045 ignition[899]: no config at "/usr/lib/ignition/user.ign" Apr 24 23:54:56.999052 ignition[899]: failed to fetch config: resource requires networking Apr 24 23:54:56.999400 ignition[899]: Ignition finished successfully Apr 24 23:54:57.022593 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:54:57.036635 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 24 23:54:57.057374 ignition[912]: Ignition 2.19.0 Apr 24 23:54:57.057388 ignition[912]: Stage: fetch Apr 24 23:54:57.057635 ignition[912]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:57.057650 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:57.057757 ignition[912]: parsed url from cmdline: "" Apr 24 23:54:57.057761 ignition[912]: no config URL provided Apr 24 23:54:57.057767 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 23:54:57.057776 ignition[912]: no config at "/usr/lib/ignition/user.ign" Apr 24 23:54:57.057805 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 24 23:54:57.129224 ignition[912]: GET result: OK Apr 24 23:54:57.129362 ignition[912]: config has been read from IMDS userdata Apr 24 23:54:57.129393 ignition[912]: parsing config with SHA512: 764ea7f98348eea1464460f13b22af133f71e33a0840319faa03ab58dcd01d7f7929e9115d493cabe795ab1ec0cc7f89c32b7c2c66117b2add72e1f224e6c7d4 Apr 24 23:54:57.136075 unknown[912]: fetched base config from "system" Apr 24 23:54:57.136932 ignition[912]: fetch: fetch complete Apr 24 23:54:57.136098 unknown[912]: fetched base config from "system" Apr 24 23:54:57.136945 ignition[912]: fetch: fetch passed Apr 24 23:54:57.136106 unknown[912]: fetched user config from "azure" Apr 24 23:54:57.137003 ignition[912]: Ignition finished successfully Apr 24 23:54:57.142627 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 24 23:54:57.155488 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 24 23:54:57.179586 ignition[918]: Ignition 2.19.0 Apr 24 23:54:57.179600 ignition[918]: Stage: kargs Apr 24 23:54:57.179838 ignition[918]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:57.183622 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 24 23:54:57.179853 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:57.180732 ignition[918]: kargs: kargs passed Apr 24 23:54:57.180779 ignition[918]: Ignition finished successfully Apr 24 23:54:57.199589 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 24 23:54:57.218370 ignition[924]: Ignition 2.19.0 Apr 24 23:54:57.218384 ignition[924]: Stage: disks Apr 24 23:54:57.220840 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 24 23:54:57.218601 ignition[924]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:57.225902 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 24 23:54:57.218615 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:57.231875 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 23:54:57.219852 ignition[924]: disks: disks passed Apr 24 23:54:57.235572 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:54:57.219899 ignition[924]: Ignition finished successfully Apr 24 23:54:57.240975 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:54:57.243985 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:54:57.269050 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 24 23:54:57.325235 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 24 23:54:57.329672 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 24 23:54:57.341527 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 24 23:54:57.435620 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none. Apr 24 23:54:57.436255 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 23:54:57.441950 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Apr 24 23:54:57.478479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:54:57.493369 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Apr 24 23:54:57.498364 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:57.504471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:54:57.504538 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:54:57.512992 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:54:57.512533 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 23:54:57.519814 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 24 23:54:57.527918 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 23:54:57.529416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:54:57.545308 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 23:54:57.550797 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 23:54:57.561049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 24 23:54:57.619681 systemd-networkd[904]: eth0: Gained IPv6LL Apr 24 23:54:58.134449 coreos-metadata[960]: Apr 24 23:54:58.134 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 24 23:54:58.140664 coreos-metadata[960]: Apr 24 23:54:58.140 INFO Fetch successful Apr 24 23:54:58.143630 coreos-metadata[960]: Apr 24 23:54:58.143 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 24 23:54:58.160767 coreos-metadata[960]: Apr 24 23:54:58.160 INFO Fetch successful Apr 24 23:54:58.167436 coreos-metadata[960]: Apr 24 23:54:58.160 INFO wrote hostname ci-4081.3.6-n-b07cc1dc35 to /sysroot/etc/hostname Apr 24 23:54:58.163267 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:54:58.233675 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 23:54:58.266958 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory Apr 24 23:54:58.286224 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 23:54:58.293039 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 23:54:59.016733 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 23:54:59.033469 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 23:54:59.055434 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:59.059589 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 23:54:59.063184 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 24 23:54:59.101300 ignition[1066]: INFO : Ignition 2.19.0 Apr 24 23:54:59.104202 ignition[1066]: INFO : Stage: mount Apr 24 23:54:59.104202 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:59.104202 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:59.104202 ignition[1066]: INFO : mount: mount passed Apr 24 23:54:59.104202 ignition[1066]: INFO : Ignition finished successfully Apr 24 23:54:59.105212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 24 23:54:59.113367 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 24 23:54:59.131819 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 24 23:54:59.140964 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:54:59.161366 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077) Apr 24 23:54:59.170601 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:59.170661 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:54:59.173398 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:54:59.181430 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:54:59.182927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 24 23:54:59.208929 ignition[1093]: INFO : Ignition 2.19.0 Apr 24 23:54:59.208929 ignition[1093]: INFO : Stage: files Apr 24 23:54:59.213857 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:59.213857 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:59.213857 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Apr 24 23:54:59.224943 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 24 23:54:59.224943 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 24 23:54:59.326838 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 24 23:54:59.331444 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 24 23:54:59.331444 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 24 23:54:59.327277 unknown[1093]: wrote ssh authorized keys file for user: core Apr 24 23:54:59.445567 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:54:59.451548 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 24 23:54:59.482705 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 24 23:54:59.610594 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:54:59.616392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 24 23:54:59.616392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 24 23:54:59.616392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:54:59.631039 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:54:59.631039 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 24 23:54:59.911630 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 24 23:55:00.276066 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 23:55:00.276066 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 24 23:55:00.304958 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 24 23:55:00.314548 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:55:00.314548 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:55:00.314548 ignition[1093]: INFO : files: files passed Apr 24 23:55:00.314548 ignition[1093]: INFO : Ignition finished successfully Apr 24 23:55:00.308437 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 24 23:55:00.327497 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 24 23:55:00.341469 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 24 23:55:00.365580 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 24 23:55:00.365707 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 24 23:55:00.391437 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:55:00.391437 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:55:00.401202 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:55:00.408892 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:55:00.412894 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 24 23:55:00.426514 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 24 23:55:00.449691 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 24 23:55:00.449802 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 24 23:55:00.456942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 24 23:55:00.466740 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 24 23:55:00.470007 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 24 23:55:00.482569 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 24 23:55:00.498330 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:55:00.509515 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 24 23:55:00.522417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:55:00.526480 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 24 23:55:00.538520 systemd[1]: Stopped target timers.target - Timer Units. Apr 24 23:55:00.541398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 24 23:55:00.541533 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:55:00.548292 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 24 23:55:00.553566 systemd[1]: Stopped target basic.target - Basic System. Apr 24 23:55:00.559588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 24 23:55:00.562871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:55:00.578036 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 24 23:55:00.584790 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 24 23:55:00.584974 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:55:00.585575 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 24 23:55:00.586064 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 24 23:55:00.587283 systemd[1]: Stopped target swap.target - Swaps. Apr 24 23:55:00.587741 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 24 23:55:00.587905 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:55:00.588756 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:55:00.589242 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:55:00.589678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 24 23:55:00.619323 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:55:00.645968 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 24 23:55:00.646185 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 24 23:55:00.652328 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 24 23:55:00.652535 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:55:00.667634 systemd[1]: ignition-files.service: Deactivated successfully. Apr 24 23:55:00.670375 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 24 23:55:00.676355 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 24 23:55:00.679574 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:55:00.696539 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 24 23:55:00.712563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 24 23:55:00.718497 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 24 23:55:00.733485 ignition[1146]: INFO : Ignition 2.19.0 Apr 24 23:55:00.733485 ignition[1146]: INFO : Stage: umount Apr 24 23:55:00.733485 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:55:00.733485 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:55:00.733485 ignition[1146]: INFO : umount: umount passed Apr 24 23:55:00.733485 ignition[1146]: INFO : Ignition finished successfully Apr 24 23:55:00.718689 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:55:00.727238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 24 23:55:00.727393 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 23:55:00.733221 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 24 23:55:00.733308 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 24 23:55:00.737484 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 24 23:55:00.737760 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Apr 24 23:55:00.741870 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:55:00.741914 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:55:00.742478 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 24 23:55:00.742514 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 24 23:55:00.743017 systemd[1]: Stopped target network.target - Network. Apr 24 23:55:00.743498 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:55:00.743536 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:55:00.745614 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:55:00.745644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:55:00.769559 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:55:00.773219 systemd[1]: Stopped target slices.target - Slice Units. Apr 24 23:55:00.773375 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:55:00.774031 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:55:00.774072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:55:00.774505 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:55:00.774544 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:55:00.774944 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:55:00.774993 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:55:00.775416 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:55:00.775452 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 24 23:55:00.776096 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:55:00.776392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Apr 24 23:55:00.777986 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:55:00.778592 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 24 23:55:00.778690 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 24 23:55:00.812488 systemd-networkd[904]: eth0: DHCPv6 lease lost Apr 24 23:55:00.818304 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:55:00.818433 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:55:00.834805 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:55:00.834922 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:55:00.839707 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:55:00.839817 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:55:00.849035 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:55:00.849124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:55:00.923737 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:55:00.923842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 24 23:55:00.940469 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:55:00.946142 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 24 23:55:00.946222 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:55:00.952329 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:55:00.952403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:55:00.963303 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 24 23:55:00.963376 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:55:00.968856 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 24 23:55:00.971826 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:55:00.981723 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:55:01.005112 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:55:01.008204 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:55:01.008689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:55:01.008736 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:55:01.008832 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:55:01.008865 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:55:01.009263 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:55:01.067946 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: Data path switched from VF: enP64114s1 Apr 24 23:55:01.009303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:55:01.010704 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 24 23:55:01.010748 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:55:01.011693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:55:01.011730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:55:01.032656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:55:01.042026 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:55:01.042086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:55:01.045325 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Apr 24 23:55:01.045389 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:55:01.051628 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:55:01.051681 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:55:01.059266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:55:01.059310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:01.071154 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:55:01.073787 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:55:01.121974 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:55:01.122109 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 24 23:55:01.130180 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:55:01.140609 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:55:01.150412 systemd[1]: Switching root. 
Apr 24 23:55:01.230867 systemd-journald[177]: Journal stopped Apr 24 23:54:52.121100 kernel: Secure boot disabled Apr 24 23:54:52.121115 kernel: ACPI: Early table checksum verification disabled Apr 24 23:54:52.121127 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 24 23:54:52.121136 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121144 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121156 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 24 23:54:52.121164 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 24 23:54:52.121174 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121184 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121194 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121206 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121213 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121223 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:52.121233 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 24 23:54:52.121240 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Apr 24 23:54:52.121252 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 24 23:54:52.121260 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 24 23:54:52.121268 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 24 23:54:52.121284 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 24 23:54:52.121291 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Apr 24 23:54:52.121299 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Apr 24 23:54:52.121306 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 24 23:54:52.121314 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 24 23:54:52.121324 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 24 23:54:52.121333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 24 23:54:52.121341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 24 23:54:52.121348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 24 23:54:52.121362 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 24 23:54:52.121370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 24 23:54:52.121380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 24 23:54:52.121411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 24 23:54:52.121418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 24 23:54:52.121431 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 24 23:54:52.121439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 24 23:54:52.121449 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 24 23:54:52.121461 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 24 23:54:52.121469 kernel: Zone ranges: Apr 24 23:54:52.121481 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:54:52.121489 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 24 23:54:52.121496 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:54:52.121504 kernel: Movable zone start for each node Apr 24 23:54:52.121515 kernel: Early memory node ranges Apr 24 23:54:52.121523 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Apr 24 23:54:52.121533 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Apr 24 23:54:52.121544 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Apr 24 23:54:52.121552 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 24 23:54:52.121564 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:54:52.121571 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 24 23:54:52.121582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:54:52.121591 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 24 23:54:52.121603 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 24 23:54:52.121611 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Apr 24 23:54:52.121619 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 24 23:54:52.121633 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 24 23:54:52.121641 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 24 23:54:52.121653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:54:52.121660 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:54:52.121671 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 24 23:54:52.121680 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 24 23:54:52.121688 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 24 23:54:52.121700 kernel: Booting paravirtualized kernel on Hyper-V Apr 24 23:54:52.121707 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:54:52.121722 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 24 23:54:52.121731 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 24 23:54:52.121738 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 24 23:54:52.121748 kernel: pcpu-alloc: [0] 0 1 Apr 24 
23:54:52.121757 kernel: Hyper-V: PV spinlocks enabled Apr 24 23:54:52.121764 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 24 23:54:52.121777 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:54:52.121785 kernel: random: crng init done Apr 24 23:54:52.121800 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 24 23:54:52.121807 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 24 23:54:52.121815 kernel: Fallback order for Node 0: 0 Apr 24 23:54:52.121826 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Apr 24 23:54:52.121835 kernel: Policy zone: Normal Apr 24 23:54:52.121845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:54:52.121855 kernel: software IO TLB: area num 2. Apr 24 23:54:52.121863 kernel: Memory: 8061212K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 321756K reserved, 0K cma-reserved) Apr 24 23:54:52.121875 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 24 23:54:52.121891 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:54:52.121904 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:54:52.121912 kernel: Dynamic Preempt: voluntary Apr 24 23:54:52.121927 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:54:52.121936 kernel: rcu: RCU event tracing is enabled. Apr 24 23:54:52.121949 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
Apr 24 23:54:52.121957 kernel: Trampoline variant of Tasks RCU enabled. Apr 24 23:54:52.121965 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:54:52.121977 kernel: Tracing variant of Tasks RCU enabled. Apr 24 23:54:52.121989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 24 23:54:52.122000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 24 23:54:52.122010 kernel: Using NULL legacy PIC Apr 24 23:54:52.122023 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 24 23:54:52.122031 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 24 23:54:52.122043 kernel: Console: colour dummy device 80x25 Apr 24 23:54:52.122052 kernel: printk: console [tty1] enabled Apr 24 23:54:52.122060 kernel: printk: console [ttyS0] enabled Apr 24 23:54:52.122074 kernel: printk: bootconsole [earlyser0] disabled Apr 24 23:54:52.122082 kernel: ACPI: Core revision 20230628 Apr 24 23:54:52.122095 kernel: Failed to register legacy timer interrupt Apr 24 23:54:52.122103 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:54:52.122112 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 24 23:54:52.122124 kernel: Hyper-V: Using IPI hypercalls Apr 24 23:54:52.122132 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 24 23:54:52.122143 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 24 23:54:52.122153 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 24 23:54:52.122166 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 24 23:54:52.122176 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 24 23:54:52.122183 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 24 23:54:52.122195 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593907) Apr 24 23:54:52.122204 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 24 23:54:52.122213 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 24 23:54:52.122225 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:54:52.122233 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:54:52.122245 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:54:52.122253 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 24 23:54:52.122268 kernel: RETBleed: Vulnerable Apr 24 23:54:52.122276 kernel: Speculative Store Bypass: Vulnerable Apr 24 23:54:52.122286 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:54:52.122296 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:54:52.122304 kernel: active return thunk: its_return_thunk Apr 24 23:54:52.122315 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 24 23:54:52.122324 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:54:52.122332 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:54:52.122353 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:54:52.122363 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 24 23:54:52.122375 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 24 23:54:52.124424 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 24 23:54:52.124448 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:54:52.124464 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 24 23:54:52.124479 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 24 23:54:52.124494 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 24 23:54:52.124508 kernel: 
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 24 23:54:52.124523 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:54:52.124537 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:54:52.124551 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:54:52.124565 kernel: landlock: Up and running. Apr 24 23:54:52.124578 kernel: SELinux: Initializing. Apr 24 23:54:52.124598 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:54:52.124612 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:54:52.124626 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 24 23:54:52.124639 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:52.124653 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:52.124667 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:52.124681 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 24 23:54:52.124695 kernel: signal: max sigframe size: 3632 Apr 24 23:54:52.124708 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:54:52.124726 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:54:52.124741 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 24 23:54:52.124756 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:54:52.124770 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:54:52.124783 kernel: .... node #0, CPUs: #1 Apr 24 23:54:52.124799 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. 
Apr 24 23:54:52.124815 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 24 23:54:52.124830 kernel: smp: Brought up 1 node, 2 CPUs Apr 24 23:54:52.124844 kernel: smpboot: Max logical packages: 1 Apr 24 23:54:52.124862 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 24 23:54:52.124876 kernel: devtmpfs: initialized Apr 24 23:54:52.124891 kernel: x86/mm: Memory block size: 128MB Apr 24 23:54:52.124906 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 24 23:54:52.124920 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:54:52.124935 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 24 23:54:52.124949 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:54:52.124963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:54:52.124978 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:54:52.124995 kernel: audit: type=2000 audit(1777074890.030:1): state=initialized audit_enabled=0 res=1 Apr 24 23:54:52.125009 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 24 23:54:52.125023 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 24 23:54:52.125037 kernel: cpuidle: using governor menu Apr 24 23:54:52.125052 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 24 23:54:52.125067 kernel: dca service started, version 1.12.1 Apr 24 23:54:52.125081 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Apr 24 23:54:52.125096 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Apr 24 23:54:52.125110 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 24 23:54:52.125128 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:54:52.125143 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:54:52.125157 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:54:52.125172 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:54:52.125187 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:54:52.125202 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:54:52.125216 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:54:52.125231 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:54:52.125249 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:54:52.125264 kernel: ACPI: Interpreter enabled
Apr 24 23:54:52.125279 kernel: ACPI: PM: (supports S0 S5)
Apr 24 23:54:52.125294 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:54:52.125309 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:54:52.125325 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 24 23:54:52.125340 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 24 23:54:52.125355 kernel: iommu: Default domain type: Translated
Apr 24 23:54:52.125371 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:54:52.125406 kernel: efivars: Registered efivars operations
Apr 24 23:54:52.125425 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:54:52.125440 kernel: PCI: System does not support PCI
Apr 24 23:54:52.125454 kernel: vgaarb: loaded
Apr 24 23:54:52.125469 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 24 23:54:52.125484 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:54:52.125498 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:54:52.125513 kernel: pnp: PnP ACPI init
Apr 24 23:54:52.125529 kernel: pnp: PnP ACPI: found 3 devices
Apr 24 23:54:52.125543 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:54:52.125561 kernel: NET: Registered PF_INET protocol family
Apr 24 23:54:52.125576 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 24 23:54:52.125591 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 24 23:54:52.125603 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:54:52.125618 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:54:52.125631 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 24 23:54:52.125646 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 24 23:54:52.125662 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:54:52.125675 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:54:52.125694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:54:52.125708 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:54:52.125724 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:54:52.125739 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 23:54:52.125754 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 24 23:54:52.125768 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:54:52.125782 kernel: Initialise system trusted keyrings
Apr 24 23:54:52.125797 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 24 23:54:52.125815 kernel: Key type asymmetric registered
Apr 24 23:54:52.125829 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:54:52.125842 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:54:52.125856 kernel: io scheduler mq-deadline registered
Apr 24 23:54:52.125870 kernel: io scheduler kyber registered
Apr 24 23:54:52.125885 kernel: io scheduler bfq registered
Apr 24 23:54:52.125900 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:54:52.125915 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:54:52.125930 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:54:52.125944 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 24 23:54:52.125963 kernel: i8042: PNP: No PS/2 controller found.
Apr 24 23:54:52.126158 kernel: rtc_cmos 00:02: registered as rtc0
Apr 24 23:54:52.126304 kernel: rtc_cmos 00:02: setting system clock to 2026-04-24T23:54:51 UTC (1777074891)
Apr 24 23:54:52.126559 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 24 23:54:52.126582 kernel: intel_pstate: CPU model not supported
Apr 24 23:54:52.126596 kernel: efifb: probing for efifb
Apr 24 23:54:52.126610 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 24 23:54:52.126630 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 24 23:54:52.126644 kernel: efifb: scrolling: redraw
Apr 24 23:54:52.126659 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 23:54:52.126672 kernel: Console: switching to colour frame buffer device 128x48
Apr 24 23:54:52.126687 kernel: fb0: EFI VGA frame buffer device
Apr 24 23:54:52.126704 kernel: pstore: Using crash dump compression: deflate
Apr 24 23:54:52.126718 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 23:54:52.126733 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:54:52.126748 kernel: Segment Routing with IPv6
Apr 24 23:54:52.126767 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:54:52.126782 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:54:52.126797 kernel: Key type dns_resolver registered
Apr 24 23:54:52.126812 kernel: IPI shorthand broadcast: enabled
Apr 24 23:54:52.126827 kernel: sched_clock: Marking stable (871002700, 54754400)->(1190490400, -264733300)
Apr 24 23:54:52.126843 kernel: registered taskstats version 1
Apr 24 23:54:52.126858 kernel: Loading compiled-in X.509 certificates
Apr 24 23:54:52.126873 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:54:52.126888 kernel: Key type .fscrypt registered
Apr 24 23:54:52.126906 kernel: Key type fscrypt-provisioning registered
Apr 24 23:54:52.126920 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:54:52.126935 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:54:52.126950 kernel: ima: No architecture policies found
Apr 24 23:54:52.126966 kernel: clk: Disabling unused clocks
Apr 24 23:54:52.126981 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:54:52.126996 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:54:52.127011 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:54:52.127026 kernel: Run /init as init process
Apr 24 23:54:52.127044 kernel: with arguments:
Apr 24 23:54:52.127059 kernel: /init
Apr 24 23:54:52.127074 kernel: with environment:
Apr 24 23:54:52.127089 kernel: HOME=/
Apr 24 23:54:52.127103 kernel: TERM=linux
Apr 24 23:54:52.127121 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:54:52.127139 systemd[1]: Detected virtualization microsoft.
Apr 24 23:54:52.127156 systemd[1]: Detected architecture x86-64.
Apr 24 23:54:52.127174 systemd[1]: Running in initrd.
Apr 24 23:54:52.127189 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:54:52.127204 systemd[1]: Hostname set to .
Apr 24 23:54:52.127220 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:54:52.127236 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:54:52.127251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:54:52.127267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:54:52.127284 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:54:52.127303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:54:52.127318 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:54:52.127334 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:54:52.127353 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:54:52.127369 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:54:52.127398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:54:52.127412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:54:52.127438 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:54:52.127451 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:54:52.127464 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:54:52.127477 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:54:52.127492 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:54:52.127508 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:54:52.127524 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:54:52.127539 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:54:52.127555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:54:52.127574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:54:52.127590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:54:52.127606 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:54:52.127622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:54:52.127638 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:54:52.127654 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:54:52.127670 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:54:52.127687 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:54:52.127705 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:54:52.127746 systemd-journald[177]: Collecting audit messages is disabled.
Apr 24 23:54:52.127782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:52.127798 systemd-journald[177]: Journal started
Apr 24 23:54:52.127834 systemd-journald[177]: Runtime Journal (/run/log/journal/2fffbde258e84252a73eaa97eb29f100) is 8.0M, max 158.7M, 150.7M free.
Apr 24 23:54:52.139405 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:54:52.143755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:54:52.147568 systemd-modules-load[178]: Inserted module 'overlay'
Apr 24 23:54:52.153610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:54:52.161293 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:54:52.163971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:52.181556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:52.190570 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:54:52.199511 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:54:52.214604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:54:52.221164 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:54:52.231123 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 24 23:54:52.234414 kernel: Bridge firewalling registered
Apr 24 23:54:52.236552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:54:52.236928 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:54:52.237336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:52.238307 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:54:52.278076 dracut-cmdline[209]: dracut-dracut-053
Apr 24 23:54:52.278076 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:54:52.243530 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:54:52.244517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:54:52.267818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:54:52.279561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:54:52.317603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:54:52.357226 systemd-resolved[245]: Positive Trust Anchors:
Apr 24 23:54:52.357243 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:54:52.357307 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:54:52.387475 systemd-resolved[245]: Defaulting to hostname 'linux'.
Apr 24 23:54:52.391525 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:54:52.400761 kernel: SCSI subsystem initialized
Apr 24 23:54:52.400936 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:54:52.411402 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:54:52.423403 kernel: iscsi: registered transport (tcp)
Apr 24 23:54:52.444310 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:54:52.444399 kernel: QLogic iSCSI HBA Driver
Apr 24 23:54:52.481315 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:54:52.491600 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:54:52.522671 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:54:52.522742 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:54:52.527402 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:54:52.566403 kernel: raid6: avx512x4 gen() 18322 MB/s
Apr 24 23:54:52.586401 kernel: raid6: avx512x2 gen() 18339 MB/s
Apr 24 23:54:52.605396 kernel: raid6: avx512x1 gen() 18304 MB/s
Apr 24 23:54:52.624398 kernel: raid6: avx2x4 gen() 18174 MB/s
Apr 24 23:54:52.644399 kernel: raid6: avx2x2 gen() 18036 MB/s
Apr 24 23:54:52.664932 kernel: raid6: avx2x1 gen() 13753 MB/s
Apr 24 23:54:52.664968 kernel: raid6: using algorithm avx512x2 gen() 18339 MB/s
Apr 24 23:54:52.686993 kernel: raid6: .... xor() 30368 MB/s, rmw enabled
Apr 24 23:54:52.687024 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:54:52.710407 kernel: xor: automatically using best checksumming function avx
Apr 24 23:54:52.859425 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:54:52.869375 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:54:52.881581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:54:52.896360 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Apr 24 23:54:52.901067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:54:52.916539 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:54:52.932812 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Apr 24 23:54:52.961423 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:54:52.970640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:54:53.015417 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:54:53.027679 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:54:53.050605 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:54:53.055635 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:54:53.063183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:54:53.070784 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:54:53.088536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:54:53.118225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:54:53.122081 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:54:53.130786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:54:53.130934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:53.134904 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:53.138294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:54:53.138481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:53.142085 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:53.174409 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:54:53.174462 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:54:53.176619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:53.195301 kernel: hv_vmbus: Vmbus version:5.2
Apr 24 23:54:53.195179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:54:53.195304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:53.211535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:53.229407 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 24 23:54:53.243410 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 24 23:54:53.249631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:53.268833 kernel: hv_vmbus: registering driver hv_netvsc
Apr 24 23:54:53.268880 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 24 23:54:53.268900 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 24 23:54:53.257603 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:53.280407 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:54:53.292403 kernel: PTP clock support registered
Apr 24 23:54:53.292468 kernel: hv_vmbus: registering driver hid_hyperv
Apr 24 23:54:53.311398 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 24 23:54:53.317569 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 24 23:54:53.330742 kernel: hv_vmbus: registering driver hv_storvsc
Apr 24 23:54:53.330790 kernel: hv_utils: Registering HyperV Utility Driver
Apr 24 23:54:53.330810 kernel: hv_vmbus: registering driver hv_utils
Apr 24 23:54:53.321854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:53.339677 kernel: hv_utils: Heartbeat IC version 3.0
Apr 24 23:54:53.339727 kernel: hv_utils: Shutdown IC version 3.2
Apr 24 23:54:53.341998 kernel: hv_utils: TimeSync IC version 4.0
Apr 24 23:54:53.343402 kernel: scsi host0: storvsc_host_t
Apr 24 23:54:53.789632 systemd-resolved[245]: Clock change detected. Flushing caches.
Apr 24 23:54:53.795397 kernel: scsi host1: storvsc_host_t
Apr 24 23:54:53.799694 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 24 23:54:53.804356 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 24 23:54:53.821599 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Apr 24 23:54:53.821877 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:54:53.823456 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Apr 24 23:54:53.835874 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 24 23:54:53.836186 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Apr 24 23:54:53.838391 kernel: sd 1:0:0:0: [sda] Write Protect is off
Apr 24 23:54:53.842581 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 24 23:54:53.842801 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 24 23:54:53.854529 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:53.854564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:54:53.854716 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Apr 24 23:54:53.881360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:54:53.909472 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: VF slot 1 added
Apr 24 23:54:53.920453 kernel: hv_vmbus: registering driver hv_pci
Apr 24 23:54:53.920521 kernel: hv_pci 434fe84d-fa72-4dd1-9cac-7ee9f7428642: PCI VMBus probing: Using version 0x10004
Apr 24 23:54:53.929054 kernel: hv_pci 434fe84d-fa72-4dd1-9cac-7ee9f7428642: PCI host bridge to bus fa72:00
Apr 24 23:54:53.929320 kernel: pci_bus fa72:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 24 23:54:53.932690 kernel: pci_bus fa72:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 24 23:54:53.942522 kernel: pci fa72:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 24 23:54:53.948412 kernel: pci fa72:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:54:53.952434 kernel: pci fa72:00:02.0: enabling Extended Tags
Apr 24 23:54:53.964483 kernel: pci fa72:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fa72:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 24 23:54:53.971573 kernel: pci_bus fa72:00: busn_res: [bus 00-ff] end is updated to 00
Apr 24 23:54:53.971862 kernel: pci fa72:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:54:54.141284 kernel: mlx5_core fa72:00:02.0: enabling device (0000 -> 0002)
Apr 24 23:54:54.150373 kernel: mlx5_core fa72:00:02.0: firmware version: 14.30.5026
Apr 24 23:54:54.304699 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 24 23:54:54.317362 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (445)
Apr 24 23:54:54.333062 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 24 23:54:54.363572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 24 23:54:54.378364 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (443)
Apr 24 23:54:54.395654 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 24 23:54:54.403193 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 24 23:54:54.421531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:54:54.435573 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: VF registering: eth1
Apr 24 23:54:54.440364 kernel: mlx5_core fa72:00:02.0 eth1: joined to eth0
Apr 24 23:54:54.440586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.442361 kernel: mlx5_core fa72:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 24 23:54:54.454365 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.463360 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.476395 kernel: mlx5_core fa72:00:02.0 enP64114s1: renamed from eth1
Apr 24 23:54:55.469397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:55.469469 disk-uuid[606]: The operation has completed successfully.
Apr 24 23:54:55.553277 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:54:55.553711 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:54:55.589486 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:54:55.598934 sh[720]: Success
Apr 24 23:54:55.628367 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 24 23:54:55.907446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:54:55.919458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:54:55.926797 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:54:55.957052 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:54:55.957120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:55.961073 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:54:55.964270 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:54:55.967020 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:54:56.189608 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:54:56.190584 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:54:56.201607 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:54:56.207478 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:54:56.236465 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:56.236516 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:56.236538 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:54:56.272367 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:54:56.288002 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:54:56.292361 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:56.300547 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:54:56.313580 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:54:56.321303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:54:56.334552 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:54:56.357605 systemd-networkd[904]: lo: Link UP
Apr 24 23:54:56.357615 systemd-networkd[904]: lo: Gained carrier
Apr 24 23:54:56.359976 systemd-networkd[904]: Enumeration completed
Apr 24 23:54:56.360253 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:54:56.364152 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:54:56.364155 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:54:56.365444 systemd[1]: Reached target network.target - Network.
Apr 24 23:54:56.421377 kernel: mlx5_core fa72:00:02.0 enP64114s1: Link up
Apr 24 23:54:56.463363 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: Data path switched to VF: enP64114s1
Apr 24 23:54:56.463539 systemd-networkd[904]: enP64114s1: Link UP
Apr 24 23:54:56.463672 systemd-networkd[904]: eth0: Link UP
Apr 24 23:54:56.468099 systemd-networkd[904]: eth0: Gained carrier
Apr 24 23:54:56.468111 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:54:56.477546 systemd-networkd[904]: enP64114s1: Gained carrier
Apr 24 23:54:56.511448 systemd-networkd[904]: eth0: DHCPv4 address 10.0.0.29/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 24 23:54:56.998799 ignition[899]: Ignition 2.19.0
Apr 24 23:54:56.998815 ignition[899]: Stage: fetch-offline
Apr 24 23:54:56.998877 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:56.998890 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:56.999019 ignition[899]: parsed url from cmdline: ""
Apr 24 23:54:56.999026 ignition[899]: no config URL provided
Apr 24 23:54:56.999033 ignition[899]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:54:56.999045 ignition[899]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:54:56.999052 ignition[899]: failed to fetch config: resource requires networking
Apr 24 23:54:56.999400 ignition[899]: Ignition finished successfully
Apr 24 23:54:57.022593 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:54:57.036635 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:54:57.057374 ignition[912]: Ignition 2.19.0
Apr 24 23:54:57.057388 ignition[912]: Stage: fetch
Apr 24 23:54:57.057635 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.057650 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.057757 ignition[912]: parsed url from cmdline: ""
Apr 24 23:54:57.057761 ignition[912]: no config URL provided
Apr 24 23:54:57.057767 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.057776 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.057805 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 24 23:54:57.129224 ignition[912]: GET result: OK
Apr 24 23:54:57.129362 ignition[912]: config has been read from IMDS userdata
Apr 24 23:54:57.129393 ignition[912]: parsing config with SHA512: 764ea7f98348eea1464460f13b22af133f71e33a0840319faa03ab58dcd01d7f7929e9115d493cabe795ab1ec0cc7f89c32b7c2c66117b2add72e1f224e6c7d4
Apr 24 23:54:57.136075 unknown[912]: fetched base config from "system"
Apr 24 23:54:57.136932 ignition[912]: fetch: fetch complete
Apr 24 23:54:57.136098 unknown[912]: fetched base config from "system"
Apr 24 23:54:57.136945 ignition[912]: fetch: fetch passed
Apr 24 23:54:57.136106 unknown[912]: fetched user config from "azure"
Apr 24 23:54:57.137003 ignition[912]: Ignition finished successfully
Apr 24 23:54:57.142627 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:54:57.155488 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:54:57.179586 ignition[918]: Ignition 2.19.0
Apr 24 23:54:57.179600 ignition[918]: Stage: kargs
Apr 24 23:54:57.179838 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.183622 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:54:57.179853 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.180732 ignition[918]: kargs: kargs passed
Apr 24 23:54:57.180779 ignition[918]: Ignition finished successfully
Apr 24 23:54:57.199589 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:54:57.218370 ignition[924]: Ignition 2.19.0
Apr 24 23:54:57.218384 ignition[924]: Stage: disks
Apr 24 23:54:57.220840 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:54:57.218601 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.225902 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:54:57.218615 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.231875 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:54:57.219852 ignition[924]: disks: disks passed
Apr 24 23:54:57.235572 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:54:57.219899 ignition[924]: Ignition finished successfully
Apr 24 23:54:57.240975 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:54:57.243985 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:54:57.269050 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:54:57.325235 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 24 23:54:57.329672 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:54:57.341527 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:54:57.435620 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:54:57.436255 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:54:57.441950 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:54:57.478479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:54:57.493369 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Apr 24 23:54:57.498364 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:57.504471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:57.504538 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:54:57.512992 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:54:57.512533 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:54:57.519814 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 24 23:54:57.527918 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:54:57.529416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:54:57.545308 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:54:57.550797 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:54:57.561049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:54:57.619681 systemd-networkd[904]: eth0: Gained IPv6LL
Apr 24 23:54:58.134449 coreos-metadata[960]: Apr 24 23:54:58.134 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 24 23:54:58.140664 coreos-metadata[960]: Apr 24 23:54:58.140 INFO Fetch successful
Apr 24 23:54:58.143630 coreos-metadata[960]: Apr 24 23:54:58.143 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 24 23:54:58.160767 coreos-metadata[960]: Apr 24 23:54:58.160 INFO Fetch successful
Apr 24 23:54:58.167436 coreos-metadata[960]: Apr 24 23:54:58.160 INFO wrote hostname ci-4081.3.6-n-b07cc1dc35 to /sysroot/etc/hostname
Apr 24 23:54:58.163267 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:54:58.233675 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:54:58.266958 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:54:58.286224 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:54:58.293039 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:54:59.016733 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:54:59.033469 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:54:59.055434 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:59.059589 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:54:59.063184 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:54:59.101300 ignition[1066]: INFO : Ignition 2.19.0
Apr 24 23:54:59.104202 ignition[1066]: INFO : Stage: mount
Apr 24 23:54:59.104202 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:59.104202 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:59.104202 ignition[1066]: INFO : mount: mount passed
Apr 24 23:54:59.104202 ignition[1066]: INFO : Ignition finished successfully
Apr 24 23:54:59.105212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:54:59.113367 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:54:59.131819 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:54:59.140964 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:54:59.161366 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077)
Apr 24 23:54:59.170601 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:59.170661 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:59.173398 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:54:59.181430 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:54:59.182927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:54:59.208929 ignition[1093]: INFO : Ignition 2.19.0
Apr 24 23:54:59.208929 ignition[1093]: INFO : Stage: files
Apr 24 23:54:59.213857 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:59.213857 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:59.213857 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:54:59.224943 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:54:59.224943 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:54:59.326838 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:54:59.331444 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:54:59.331444 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:54:59.327277 unknown[1093]: wrote ssh authorized keys file for user: core
Apr 24 23:54:59.445567 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:54:59.451548 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:54:59.482705 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:54:59.610594 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:54:59.616392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:54:59.616392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:54:59.616392 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:54:59.631039 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:54:59.631039 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:54:59.641093 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 23:54:59.911630 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:55:00.276066 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:55:00.276066 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:55:00.304958 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:55:00.314548 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:55:00.314548 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:55:00.314548 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:55:00.314548 ignition[1093]: INFO : files: files passed
Apr 24 23:55:00.314548 ignition[1093]: INFO : Ignition finished successfully
Apr 24 23:55:00.308437 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:55:00.327497 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:55:00.341469 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:55:00.365580 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:55:00.365707 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:55:00.391437 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:55:00.391437 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:55:00.401202 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:55:00.408892 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:55:00.412894 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:55:00.426514 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:55:00.449691 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:55:00.449802 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:55:00.456942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:55:00.466740 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:55:00.470007 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:55:00.482569 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:55:00.498330 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:55:00.509515 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:55:00.522417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:55:00.526480 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:55:00.538520 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:55:00.541398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:55:00.541533 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:55:00.548292 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:55:00.553566 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:55:00.559588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:55:00.562871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:55:00.578036 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:55:00.584790 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:55:00.584974 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:55:00.585575 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:55:00.586064 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:55:00.587283 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:55:00.587741 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:55:00.587905 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:55:00.588756 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:55:00.589242 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:55:00.589678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:55:00.619323 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:55:00.645968 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:55:00.646185 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:55:00.652328 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:55:00.652535 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:55:00.667634 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:55:00.670375 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:55:00.676355 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 24 23:55:00.679574 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:55:00.696539 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:55:00.712563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:55:00.718497 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:55:00.733485 ignition[1146]: INFO : Ignition 2.19.0
Apr 24 23:55:00.733485 ignition[1146]: INFO : Stage: umount
Apr 24 23:55:00.733485 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:55:00.733485 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:55:00.733485 ignition[1146]: INFO : umount: umount passed
Apr 24 23:55:00.733485 ignition[1146]: INFO : Ignition finished successfully
Apr 24 23:55:00.718689 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:55:00.727238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:55:00.727393 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:55:00.733221 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:55:00.733308 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:55:00.737484 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:55:00.737760 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:55:00.741870 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:55:00.741914 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:55:00.742478 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:55:00.742514 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:55:00.743017 systemd[1]: Stopped target network.target - Network.
Apr 24 23:55:00.743498 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:55:00.743536 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:55:00.745614 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:55:00.745644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:55:00.769559 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:55:00.773219 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:55:00.773375 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:55:00.774031 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:55:00.774072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:55:00.774505 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:55:00.774544 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:55:00.774944 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:55:00.774993 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:55:00.775416 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:55:00.775452 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:55:00.776096 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:55:00.776392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:55:00.777986 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:55:00.778592 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:55:00.778690 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:55:00.812488 systemd-networkd[904]: eth0: DHCPv6 lease lost
Apr 24 23:55:00.818304 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:55:00.818433 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:55:00.834805 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:55:00.834922 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:55:00.839707 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:55:00.839817 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:55:00.849035 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:55:00.849124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:55:00.923737 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:55:00.923842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:55:00.940469 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:55:00.946142 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:55:00.946222 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:55:00.952329 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:55:00.952403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:55:00.963303 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:55:00.963376 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:55:00.968856 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:55:00.971826 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:55:00.981723 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:55:01.005112 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:55:01.008204 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:55:01.008689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:55:01.008736 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:55:01.008832 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:55:01.008865 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:55:01.009263 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:55:01.067946 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: Data path switched from VF: enP64114s1
Apr 24 23:55:01.009303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:55:01.010704 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:55:01.010748 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:55:01.011693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:55:01.011730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:55:01.032656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:55:01.042026 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:55:01.042086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:55:01.045325 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:55:01.045389 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:55:01.051628 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:55:01.051681 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:55:01.059266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:55:01.059310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:55:01.071154 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:55:01.073787 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:55:01.121974 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:55:01.122109 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:55:01.130180 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:55:01.140609 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:55:01.150412 systemd[1]: Switching root.
Apr 24 23:55:01.230867 systemd-journald[177]: Journal stopped
Apr 24 23:55:07.716581 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:55:07.716613 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:55:07.716628 kernel: SELinux: policy capability open_perms=1
Apr 24 23:55:07.716640 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:55:07.716648 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:55:07.716657 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:55:07.716666 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:55:07.716679 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:55:07.716692 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:55:07.716708 kernel: audit: type=1403 audit(1777074903.582:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:55:07.716719 systemd[1]: Successfully loaded SELinux policy in 131.732ms.
Apr 24 23:55:07.716732 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.978ms.
Apr 24 23:55:07.716744 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:55:07.716754 systemd[1]: Detected virtualization microsoft.
Apr 24 23:55:07.716772 systemd[1]: Detected architecture x86-64.
Apr 24 23:55:07.716782 systemd[1]: Detected first boot.
Apr 24 23:55:07.716795 systemd[1]: Hostname set to .
Apr 24 23:55:07.716807 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:55:07.716819 zram_generator::config[1188]: No configuration found.
Apr 24 23:55:07.716833 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:55:07.716845 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:55:07.716857 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:55:07.716866 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:55:07.716881 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:55:07.716891 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:55:07.716906 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:55:07.716919 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:55:07.716934 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:55:07.716944 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:55:07.716958 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:55:07.716969 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:55:07.716984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:55:07.716994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:55:07.717008 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:55:07.717022 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:55:07.717036 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:55:07.717046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:55:07.717061 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:55:07.717071 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:55:07.717086 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:55:07.717099 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:55:07.717113 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:55:07.717124 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:55:07.717141 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:55:07.717151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:55:07.717166 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:55:07.717176 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:55:07.717190 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:55:07.717200 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:55:07.717214 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:55:07.717228 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:55:07.717245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:55:07.717255 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:55:07.717270 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:55:07.717281 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:55:07.717298 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:55:07.717309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:55:07.717323 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:55:07.717334 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:55:07.717354 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:55:07.717367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:55:07.717380 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:55:07.717393 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:55:07.717408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:55:07.717422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:55:07.717434 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:55:07.717449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:55:07.717459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:55:07.717469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:55:07.717484 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:55:07.717499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:55:07.717516 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:55:07.717531 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:55:07.717546 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:55:07.717562 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:55:07.717577 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:55:07.717600 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:55:07.717622 kernel: fuse: init (API version 7.39)
Apr 24 23:55:07.717640 kernel: loop: module loaded
Apr 24 23:55:07.717663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:55:07.717691 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:55:07.717713 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:55:07.717734 kernel: ACPI: bus type drm_connector registered
Apr 24 23:55:07.717754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:55:07.717775 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 23:55:07.717800 systemd[1]: Stopped verity-setup.service.
Apr 24 23:55:07.717824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:55:07.717847 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:55:07.717874 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:55:07.717895 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:55:07.717926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:55:07.717947 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:55:07.717971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:55:07.717993 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:55:07.718017 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:55:07.718071 systemd-journald[1287]: Collecting audit messages is disabled.
Apr 24 23:55:07.718117 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:55:07.718138 systemd-journald[1287]: Journal started Apr 24 23:55:07.718174 systemd-journald[1287]: Runtime Journal (/run/log/journal/5d0a1e41087442afb526ab1404cfcee3) is 8.0M, max 158.7M, 150.7M free. Apr 24 23:55:06.872934 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:55:06.991921 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 24 23:55:06.992320 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 24 23:55:07.721969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 23:55:07.730395 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:55:07.733932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:55:07.734112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:55:07.737974 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:55:07.738145 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:55:07.741892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:55:07.742086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:55:07.746610 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 23:55:07.746803 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 23:55:07.751033 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:55:07.751227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:55:07.754918 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 23:55:07.759354 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 23:55:07.781254 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 24 23:55:07.787233 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 23:55:07.800397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 23:55:07.811948 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 23:55:07.817699 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 24 23:55:07.817835 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:55:07.822317 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 24 23:55:07.827102 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 23:55:07.834469 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 23:55:07.838596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:55:07.842522 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 23:55:07.847178 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 23:55:07.851035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:55:07.852413 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 23:55:07.856605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:55:07.858546 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:55:07.866436 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 24 23:55:07.871249 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:55:07.877134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:55:07.887214 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 23:55:07.890905 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 23:55:07.900370 systemd-journald[1287]: Time spent on flushing to /var/log/journal/5d0a1e41087442afb526ab1404cfcee3 is 37.328ms for 957 entries. Apr 24 23:55:07.900370 systemd-journald[1287]: System Journal (/var/log/journal/5d0a1e41087442afb526ab1404cfcee3) is 8.0M, max 2.6G, 2.6G free. Apr 24 23:55:07.988595 systemd-journald[1287]: Received client request to flush runtime journal. Apr 24 23:55:07.988657 kernel: loop0: detected capacity change from 0 to 142488 Apr 24 23:55:07.900233 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 23:55:07.917550 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 24 23:55:07.928704 udevadm[1332]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 24 23:55:07.930603 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 23:55:07.935513 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 23:55:07.951500 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 24 23:55:07.991720 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 23:55:08.015753 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 23:55:08.016762 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 24 23:55:08.068296 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Apr 24 23:55:08.068326 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Apr 24 23:55:08.077007 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:55:08.092863 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 23:55:08.096510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:55:08.245732 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 23:55:08.261035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:55:08.283697 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Apr 24 23:55:08.284109 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Apr 24 23:55:08.297750 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:55:08.551383 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 23:55:08.614373 kernel: loop1: detected capacity change from 0 to 140768 Apr 24 23:55:09.020999 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 23:55:09.032589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:55:09.067215 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Apr 24 23:55:09.190370 kernel: loop2: detected capacity change from 0 to 228704 Apr 24 23:55:09.289373 kernel: loop3: detected capacity change from 0 to 31056 Apr 24 23:55:09.344881 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:55:09.360198 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:55:09.436986 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 24 23:55:09.466586 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:55:09.522375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 24 23:55:09.535539 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:55:09.596304 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:55:09.606372 kernel: hv_vmbus: registering driver hv_balloon Apr 24 23:55:09.606461 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 24 23:55:09.629727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:55:09.647375 kernel: hv_vmbus: registering driver hyperv_fb Apr 24 23:55:09.651375 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 24 23:55:09.659364 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 24 23:55:09.678402 kernel: Console: switching to colour dummy device 80x25 Apr 24 23:55:09.679657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:55:09.679870 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:09.693005 kernel: Console: switching to colour frame buffer device 128x48 Apr 24 23:55:09.712758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:55:09.734437 kernel: loop4: detected capacity change from 0 to 142488 Apr 24 23:55:09.832375 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1370) Apr 24 23:55:09.890765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:55:09.890988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:09.903518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 24 23:55:09.952626 kernel: loop5: detected capacity change from 0 to 140768 Apr 24 23:55:09.957421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 24 23:55:09.967074 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:55:10.005377 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 24 23:55:10.021157 systemd-networkd[1362]: lo: Link UP Apr 24 23:55:10.022489 systemd-networkd[1362]: lo: Gained carrier Apr 24 23:55:10.025785 systemd-networkd[1362]: Enumeration completed Apr 24 23:55:10.025968 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:55:10.031851 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:55:10.031856 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:55:10.035524 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:55:10.056762 kernel: loop6: detected capacity change from 0 to 228704 Apr 24 23:55:10.064133 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:55:10.109119 kernel: mlx5_core fa72:00:02.0 enP64114s1: Link up Apr 24 23:55:10.121438 kernel: loop7: detected capacity change from 0 to 31056 Apr 24 23:55:10.131815 kernel: hv_netvsc 7c1e521f-c7c2-7c1e-521f-c7c27c1e521f eth0: Data path switched to VF: enP64114s1 Apr 24 23:55:10.133540 systemd-networkd[1362]: enP64114s1: Link UP Apr 24 23:55:10.133875 systemd-networkd[1362]: eth0: Link UP Apr 24 23:55:10.133884 systemd-networkd[1362]: eth0: Gained carrier Apr 24 23:55:10.133909 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 24 23:55:10.142609 systemd-networkd[1362]: enP64114s1: Gained carrier Apr 24 23:55:10.180399 systemd-networkd[1362]: eth0: DHCPv4 address 10.0.0.29/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 24 23:55:10.182413 (sd-merge)[1411]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 24 23:55:10.183024 (sd-merge)[1411]: Merged extensions into '/usr'. Apr 24 23:55:10.187157 systemd[1]: Reloading requested from client PID 1324 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 23:55:10.187177 systemd[1]: Reloading... Apr 24 23:55:10.259557 zram_generator::config[1475]: No configuration found. Apr 24 23:55:10.454627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:55:10.532283 systemd[1]: Reloading finished in 344 ms. Apr 24 23:55:10.561714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 23:55:10.565882 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:55:10.569985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:10.584544 systemd[1]: Starting ensure-sysext.service... Apr 24 23:55:10.589526 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:55:10.605540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 23:55:10.612860 systemd[1]: Reloading requested from client PID 1542 ('systemctl') (unit ensure-sysext.service)... Apr 24 23:55:10.612882 systemd[1]: Reloading... Apr 24 23:55:10.628225 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 24 23:55:10.629220 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 23:55:10.630766 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 23:55:10.631317 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Apr 24 23:55:10.631502 systemd-tmpfiles[1544]: ACLs are not supported, ignoring. Apr 24 23:55:10.651938 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:55:10.652100 systemd-tmpfiles[1544]: Skipping /boot Apr 24 23:55:10.677009 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:55:10.677401 systemd-tmpfiles[1544]: Skipping /boot Apr 24 23:55:10.705378 zram_generator::config[1572]: No configuration found. Apr 24 23:55:10.707374 lvm[1543]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:55:10.868441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:55:10.946654 systemd[1]: Reloading finished in 333 ms. Apr 24 23:55:10.969961 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:55:10.974516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:55:10.985414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:55:10.992628 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:55:11.012643 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:55:11.028645 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 24 23:55:11.041476 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 23:55:11.046174 lvm[1645]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:55:11.049632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:55:11.056656 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 23:55:11.065636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:11.065933 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:55:11.073849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:55:11.089861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:55:11.096720 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:55:11.101775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:55:11.101952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:11.103001 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:55:11.107994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:55:11.108180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:55:11.112241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:55:11.112445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:55:11.116894 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 24 23:55:11.117013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:55:11.131024 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:55:11.139513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:11.140103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:55:11.147436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:55:11.153844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:55:11.165622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:55:11.178641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:55:11.182062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:55:11.182334 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:55:11.187810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:11.189332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:55:11.190729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:55:11.207562 systemd[1]: Finished ensure-sysext.service. Apr 24 23:55:11.210749 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:55:11.210927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:55:11.214242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:55:11.214454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 24 23:55:11.218788 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:55:11.218987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:55:11.225281 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:55:11.225440 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:55:11.239363 augenrules[1672]: No rules Apr 24 23:55:11.239874 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:55:11.243961 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 23:55:11.266775 systemd-resolved[1647]: Positive Trust Anchors: Apr 24 23:55:11.266790 systemd-resolved[1647]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:55:11.266836 systemd-resolved[1647]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:55:11.295528 systemd-resolved[1647]: Using system hostname 'ci-4081.3.6-n-b07cc1dc35'. Apr 24 23:55:11.297379 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:55:11.300882 systemd[1]: Reached target network.target - Network. Apr 24 23:55:11.303643 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 24 23:55:11.766293 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:55:11.770431 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:55:12.147762 systemd-networkd[1362]: eth0: Gained IPv6LL Apr 24 23:55:12.151069 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:55:12.155620 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:55:19.824529 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 23:55:20.131075 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 23:55:20.139598 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:55:20.174377 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:55:20.178405 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:55:20.183516 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:55:20.187607 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:55:20.191889 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:55:20.195445 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:55:20.199161 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:55:20.203365 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:55:20.203418 systemd[1]: Reached target paths.target - Path Units. 
Apr 24 23:55:20.206288 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:55:20.216819 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:55:20.222101 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:55:20.232526 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:55:20.236586 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:55:20.239927 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:55:20.242794 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:55:20.245553 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:55:20.245596 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:55:20.425499 systemd[1]: Starting chronyd.service - NTP client/server... Apr 24 23:55:20.432481 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:55:20.443507 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 23:55:20.453605 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:55:20.459425 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:55:20.470705 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:55:20.474245 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:55:20.474306 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 24 23:55:20.477956 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Apr 24 23:55:20.482859 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 24 23:55:20.486967 jq[1693]: false Apr 24 23:55:20.488523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:55:20.498583 (chronyd)[1689]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 24 23:55:20.500532 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:55:20.509518 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:55:20.516491 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:55:20.522522 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:55:20.532561 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:55:20.543816 extend-filesystems[1694]: Found loop4 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found loop5 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found loop6 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found loop7 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda1 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda2 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda3 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found usr Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda4 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda6 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda7 Apr 24 23:55:20.546585 extend-filesystems[1694]: Found sda9 Apr 24 23:55:20.546585 extend-filesystems[1694]: Checking size of /dev/sda9 Apr 24 23:55:20.598790 kernel: hv_utils: KVP IC version 4.0 Apr 24 23:55:20.544601 chronyd[1713]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 
-DEBUG) Apr 24 23:55:20.554955 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:55:20.553446 KVP[1695]: KVP starting; pid is:1695 Apr 24 23:55:20.558949 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:55:20.598026 KVP[1695]: KVP LIC Version: 3.1 Apr 24 23:55:20.559532 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:55:20.569840 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 23:55:20.586461 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:55:20.607210 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:55:20.607467 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 23:55:20.610569 jq[1717]: true Apr 24 23:55:20.610990 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:55:20.611335 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:55:20.618981 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:55:20.619187 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:55:20.647102 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:55:20.656119 jq[1722]: true Apr 24 23:55:20.931488 dbus-daemon[1692]: [system] SELinux support is enabled Apr 24 23:55:20.931720 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 24 23:55:20.935432 chronyd[1713]: Timezone right/UTC failed leap second check, ignoring
Apr 24 23:55:20.935651 chronyd[1713]: Loaded seccomp filter (level 2)
Apr 24 23:55:20.942027 extend-filesystems[1694]: Old size kept for /dev/sda9
Apr 24 23:55:20.945292 extend-filesystems[1694]: Found sr0
Apr 24 23:55:20.957544 systemd[1]: Started chronyd.service - NTP client/server.
Apr 24 23:55:20.961090 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:55:20.963259 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:55:20.969423 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:55:20.969491 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:55:20.975105 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:55:20.975139 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:55:21.001960 tar[1721]: linux-amd64/LICENSE
Apr 24 23:55:21.002276 tar[1721]: linux-amd64/helm
Apr 24 23:55:21.013803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 23:55:21.062573 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1770)
Apr 24 23:55:21.070371 update_engine[1715]: I20260424 23:55:21.068040  1715 main.cc:92] Flatcar Update Engine starting
Apr 24 23:55:21.075375 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:55:21.087562 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:55:21.091043 update_engine[1715]: I20260424 23:55:21.090981  1715 update_check_scheduler.cc:74] Next update check in 10m5s
Apr 24 23:55:21.094546 systemd-logind[1712]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 23:55:21.097536 systemd-logind[1712]: New seat seat0.
Apr 24 23:55:21.100423 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:55:21.262654 coreos-metadata[1691]: Apr 24 23:55:21.262 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 24 23:55:21.264857 coreos-metadata[1691]: Apr 24 23:55:21.264 INFO Fetch successful
Apr 24 23:55:21.264857 coreos-metadata[1691]: Apr 24 23:55:21.264 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 24 23:55:21.272029 coreos-metadata[1691]: Apr 24 23:55:21.272 INFO Fetch successful
Apr 24 23:55:21.272029 coreos-metadata[1691]: Apr 24 23:55:21.272 INFO Fetching http://168.63.129.16/machine/b415822d-721f-4bab-bc8e-d55ff60304c7/d73b016e%2D2eef%2D48af%2Db3f2%2Db5fbd2b8e8f7.%5Fci%2D4081.3.6%2Dn%2Db07cc1dc35?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 24 23:55:21.273504 coreos-metadata[1691]: Apr 24 23:55:21.273 INFO Fetch successful
Apr 24 23:55:21.273805 coreos-metadata[1691]: Apr 24 23:55:21.273 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 24 23:55:21.285380 coreos-metadata[1691]: Apr 24 23:55:21.283 INFO Fetch successful
Apr 24 23:55:21.336005 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 24 23:55:21.340630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 24 23:55:21.390971 bash[1747]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:55:21.394099 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:55:21.402104 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 24 23:55:21.925436 tar[1721]: linux-amd64/README.md
Apr 24 23:55:21.941136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 23:55:22.092965 sshd_keygen[1751]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:55:22.113514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:55:22.123608 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:55:22.126020 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:55:22.136792 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 23:55:22.141572 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 24 23:55:22.160394 locksmithd[1778]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:55:22.160633 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 23:55:22.160867 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 23:55:22.175409 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 23:55:22.206592 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Apr 24 23:55:22.401842 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 23:55:22.417125 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 23:55:22.425493 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 23:55:22.429190 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 23:55:22.753086 kubelet[1824]: E0424 23:55:22.752962    1824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:55:22.755758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:55:22.755969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:55:24.314156 containerd[1723]: time="2026-04-24T23:55:24.314056600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:55:24.341461 containerd[1723]: time="2026-04-24T23:55:24.341393100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343083 containerd[1723]: time="2026-04-24T23:55:24.343036000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343083 containerd[1723]: time="2026-04-24T23:55:24.343073700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:55:24.343220 containerd[1723]: time="2026-04-24T23:55:24.343097300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:55:24.343334 containerd[1723]: time="2026-04-24T23:55:24.343305900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:55:24.343399 containerd[1723]: time="2026-04-24T23:55:24.343338100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343464 containerd[1723]: time="2026-04-24T23:55:24.343441100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343503 containerd[1723]: time="2026-04-24T23:55:24.343461800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343696 containerd[1723]: time="2026-04-24T23:55:24.343670500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343696 containerd[1723]: time="2026-04-24T23:55:24.343692800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343800 containerd[1723]: time="2026-04-24T23:55:24.343713000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343800 containerd[1723]: time="2026-04-24T23:55:24.343726700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:24.343934 containerd[1723]: time="2026-04-24T23:55:24.343825900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:24.344112 containerd[1723]: time="2026-04-24T23:55:24.344082900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:24.344259 containerd[1723]: time="2026-04-24T23:55:24.344233800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:24.344307 containerd[1723]: time="2026-04-24T23:55:24.344256800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:55:24.344407 containerd[1723]: time="2026-04-24T23:55:24.344385400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:55:24.344474 containerd[1723]: time="2026-04-24T23:55:24.344452200Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:55:24.689692 containerd[1723]: time="2026-04-24T23:55:24.689288900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:55:24.689692 containerd[1723]: time="2026-04-24T23:55:24.689407300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:55:24.689692 containerd[1723]: time="2026-04-24T23:55:24.689432000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:55:24.689692 containerd[1723]: time="2026-04-24T23:55:24.689454300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:55:24.689692 containerd[1723]: time="2026-04-24T23:55:24.689475400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:55:24.689692 containerd[1723]: time="2026-04-24T23:55:24.689691800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:55:24.690030 containerd[1723]: time="2026-04-24T23:55:24.690012700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:55:24.690188 containerd[1723]: time="2026-04-24T23:55:24.690159600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:55:24.690188 containerd[1723]: time="2026-04-24T23:55:24.690187300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:55:24.690292 containerd[1723]: time="2026-04-24T23:55:24.690205500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:55:24.690292 containerd[1723]: time="2026-04-24T23:55:24.690226100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690292 containerd[1723]: time="2026-04-24T23:55:24.690244800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690292 containerd[1723]: time="2026-04-24T23:55:24.690263300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690292 containerd[1723]: time="2026-04-24T23:55:24.690283900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690476 containerd[1723]: time="2026-04-24T23:55:24.690306100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690476 containerd[1723]: time="2026-04-24T23:55:24.690326500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690476 containerd[1723]: time="2026-04-24T23:55:24.690373100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690476 containerd[1723]: time="2026-04-24T23:55:24.690396200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:55:24.690476 containerd[1723]: time="2026-04-24T23:55:24.690427700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690476 containerd[1723]: time="2026-04-24T23:55:24.690447500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690476 containerd[1723]: time="2026-04-24T23:55:24.690466700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690485400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690503300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690523700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690540900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690558900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690579600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690601000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690617200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690635800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690654100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.690711 containerd[1723]: time="2026-04-24T23:55:24.690690000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690721400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690738700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690768000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690865100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690893600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690909600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690929000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690943500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690960900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690974100Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:55:24.691129 containerd[1723]: time="2026-04-24T23:55:24.690987300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:55:24.691640 containerd[1723]: time="2026-04-24T23:55:24.691414200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:55:24.691640 containerd[1723]: time="2026-04-24T23:55:24.691596700Z" level=info msg="Connect containerd service"
Apr 24 23:55:24.691895 containerd[1723]: time="2026-04-24T23:55:24.691723300Z" level=info msg="using legacy CRI server"
Apr 24 23:55:24.691895 containerd[1723]: time="2026-04-24T23:55:24.691738600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:55:24.691895 containerd[1723]: time="2026-04-24T23:55:24.691889500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:55:24.692885 containerd[1723]: time="2026-04-24T23:55:24.692743600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:55:24.692966 containerd[1723]: time="2026-04-24T23:55:24.692913800Z" level=info msg="Start subscribing containerd event"
Apr 24 23:55:24.693013 containerd[1723]: time="2026-04-24T23:55:24.692979500Z" level=info msg="Start recovering state"
Apr 24 23:55:24.693121 containerd[1723]: time="2026-04-24T23:55:24.693088900Z" level=info msg="Start event monitor"
Apr 24 23:55:24.693121 containerd[1723]: time="2026-04-24T23:55:24.693116000Z" level=info msg="Start snapshots syncer"
Apr 24 23:55:24.693192 containerd[1723]: time="2026-04-24T23:55:24.693129500Z" level=info msg="Start cni network conf syncer for default"
Apr 24 23:55:24.693192 containerd[1723]: time="2026-04-24T23:55:24.693141400Z" level=info msg="Start streaming server"
Apr 24 23:55:24.693545 containerd[1723]: time="2026-04-24T23:55:24.693515200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 23:55:24.693614 containerd[1723]: time="2026-04-24T23:55:24.693576900Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 23:55:24.693742 systemd[1]: Started containerd.service - containerd container runtime.
Apr 24 23:55:24.697759 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 23:55:24.702690 containerd[1723]: time="2026-04-24T23:55:24.702371700Z" level=info msg="containerd successfully booted in 0.389168s"
Apr 24 23:55:24.705334 systemd[1]: Startup finished in 1.022s (kernel) + 11.286s (initrd) + 21.252s (userspace) = 33.560s.
Apr 24 23:55:27.476215 login[1852]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Apr 24 23:55:27.489882 login[1851]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 24 23:55:27.532577 systemd-logind[1712]: New session 1 of user core.
Apr 24 23:55:27.533577 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 23:55:27.540625 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 23:55:27.583609 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 23:55:27.591810 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 23:55:27.618171 (systemd)[1868]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 23:55:28.077927 systemd[1868]: Queued start job for default target default.target.
Apr 24 23:55:28.086498 systemd[1868]: Created slice app.slice - User Application Slice.
Apr 24 23:55:28.086539 systemd[1868]: Reached target paths.target - Paths.
Apr 24 23:55:28.086559 systemd[1868]: Reached target timers.target - Timers.
Apr 24 23:55:28.089490 systemd[1868]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 23:55:28.107121 systemd[1868]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 23:55:28.107259 systemd[1868]: Reached target sockets.target - Sockets.
Apr 24 23:55:28.107279 systemd[1868]: Reached target basic.target - Basic System.
Apr 24 23:55:28.107323 systemd[1868]: Reached target default.target - Main User Target.
Apr 24 23:55:28.107383 systemd[1868]: Startup finished in 436ms.
Apr 24 23:55:28.107804 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 23:55:28.118523 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 23:55:28.478063 login[1852]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 24 23:55:28.484108 systemd-logind[1712]: New session 2 of user core.
Apr 24 23:55:28.489526 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 23:55:28.647607 waagent[1845]: 2026-04-24T23:55:28.647494Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Apr 24 23:55:28.651270 waagent[1845]: 2026-04-24T23:55:28.651197Z INFO Daemon Daemon OS: flatcar 4081.3.6
Apr 24 23:55:28.654285 waagent[1845]: 2026-04-24T23:55:28.654226Z INFO Daemon Daemon Python: 3.11.9
Apr 24 23:55:28.657028 waagent[1845]: 2026-04-24T23:55:28.656958Z INFO Daemon Daemon Run daemon
Apr 24 23:55:28.659531 waagent[1845]: 2026-04-24T23:55:28.659483Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Apr 24 23:55:28.666364 waagent[1845]: 2026-04-24T23:55:28.664592Z INFO Daemon Daemon Using waagent for provisioning
Apr 24 23:55:28.666364 waagent[1845]: 2026-04-24T23:55:28.664952Z INFO Daemon Daemon Activate resource disk
Apr 24 23:55:28.666364 waagent[1845]: 2026-04-24T23:55:28.665295Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Apr 24 23:55:28.670024 waagent[1845]: 2026-04-24T23:55:28.669977Z INFO Daemon Daemon Found device: None
Apr 24 23:55:28.670996 waagent[1845]: 2026-04-24T23:55:28.670955Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Apr 24 23:55:28.671511 waagent[1845]: 2026-04-24T23:55:28.671479Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Apr 24 23:55:28.674333 waagent[1845]: 2026-04-24T23:55:28.674283Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 24 23:55:28.675310 waagent[1845]: 2026-04-24T23:55:28.675270Z INFO Daemon Daemon Running default provisioning handler
Apr 24 23:55:28.685284 waagent[1845]: 2026-04-24T23:55:28.685229Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Apr 24 23:55:28.687001 waagent[1845]: 2026-04-24T23:55:28.686954Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 24 23:55:28.687847 waagent[1845]: 2026-04-24T23:55:28.687809Z INFO Daemon Daemon cloud-init is enabled: False
Apr 24 23:55:28.688308 waagent[1845]: 2026-04-24T23:55:28.688270Z INFO Daemon Daemon Copying ovf-env.xml
Apr 24 23:55:28.891299 waagent[1845]: 2026-04-24T23:55:28.891192Z INFO Daemon Daemon Successfully mounted dvd
Apr 24 23:55:28.919071 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Apr 24 23:55:28.922230 waagent[1845]: 2026-04-24T23:55:28.922156Z INFO Daemon Daemon Detect protocol endpoint
Apr 24 23:55:28.940478 waagent[1845]: 2026-04-24T23:55:28.922512Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 24 23:55:28.940478 waagent[1845]: 2026-04-24T23:55:28.923593Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Apr 24 23:55:28.940478 waagent[1845]: 2026-04-24T23:55:28.924283Z INFO Daemon Daemon Test for route to 168.63.129.16
Apr 24 23:55:28.940478 waagent[1845]: 2026-04-24T23:55:28.924914Z INFO Daemon Daemon Route to 168.63.129.16 exists
Apr 24 23:55:28.940478 waagent[1845]: 2026-04-24T23:55:28.925273Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Apr 24 23:55:28.956731 waagent[1845]: 2026-04-24T23:55:28.956676Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Apr 24 23:55:28.965439 waagent[1845]: 2026-04-24T23:55:28.957142Z INFO Daemon Daemon Wire protocol version:2012-11-30
Apr 24 23:55:28.965439 waagent[1845]: 2026-04-24T23:55:28.957959Z INFO Daemon Daemon Server preferred version:2015-04-05
Apr 24 23:55:29.053774 waagent[1845]: 2026-04-24T23:55:29.053658Z INFO Daemon Daemon Initializing goal state during protocol detection
Apr 24 23:55:29.059083 waagent[1845]: 2026-04-24T23:55:29.058998Z INFO Daemon Daemon Forcing an update of the goal state.
Apr 24 23:55:29.065611 waagent[1845]: 2026-04-24T23:55:29.065553Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 24 23:55:29.083545 waagent[1845]: 2026-04-24T23:55:29.083491Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181
Apr 24 23:55:29.105512 waagent[1845]: 2026-04-24T23:55:29.084150Z INFO Daemon
Apr 24 23:55:29.105512 waagent[1845]: 2026-04-24T23:55:29.084291Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bc12ceb1-0da9-4c4f-9c13-3bd05bd635fe eTag: 7866287017488570865 source: Fabric]
Apr 24 23:55:29.105512 waagent[1845]: 2026-04-24T23:55:29.085106Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Apr 24 23:55:29.105512 waagent[1845]: 2026-04-24T23:55:29.086392Z INFO Daemon
Apr 24 23:55:29.105512 waagent[1845]: 2026-04-24T23:55:29.087324Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Apr 24 23:55:29.105512 waagent[1845]: 2026-04-24T23:55:29.091376Z INFO Daemon Daemon Downloading artifacts profile blob
Apr 24 23:55:29.158449 waagent[1845]: 2026-04-24T23:55:29.158294Z INFO Daemon Downloaded certificate {'thumbprint': '891701EF834A9E6AA7196CFE51638035D7AC1613', 'hasPrivateKey': True}
Apr 24 23:55:29.165777 waagent[1845]: 2026-04-24T23:55:29.159132Z INFO Daemon Fetch goal state completed
Apr 24 23:55:29.192887 waagent[1845]: 2026-04-24T23:55:29.192801Z INFO Daemon Daemon Starting provisioning
Apr 24 23:55:29.205638 waagent[1845]: 2026-04-24T23:55:29.193137Z INFO Daemon Daemon Handle ovf-env.xml.
Apr 24 23:55:29.205638 waagent[1845]: 2026-04-24T23:55:29.194389Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-b07cc1dc35]
Apr 24 23:55:29.205638 waagent[1845]: 2026-04-24T23:55:29.198114Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-b07cc1dc35]
Apr 24 23:55:29.205638 waagent[1845]: 2026-04-24T23:55:29.199536Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Apr 24 23:55:29.205638 waagent[1845]: 2026-04-24T23:55:29.200117Z INFO Daemon Daemon Primary interface is [eth0]
Apr 24 23:55:29.226367 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:55:29.226377 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:55:29.226425 systemd-networkd[1362]: eth0: DHCP lease lost
Apr 24 23:55:29.227706 waagent[1845]: 2026-04-24T23:55:29.227529Z INFO Daemon Daemon Create user account if not exists
Apr 24 23:55:29.230906 waagent[1845]: 2026-04-24T23:55:29.230809Z INFO Daemon Daemon User core already exists, skip useradd
Apr 24 23:55:29.246483 waagent[1845]: 2026-04-24T23:55:29.231012Z INFO Daemon Daemon Configure sudoer
Apr 24 23:55:29.246483 waagent[1845]: 2026-04-24T23:55:29.232312Z INFO Daemon Daemon Configure sshd
Apr 24 23:55:29.246483 waagent[1845]: 2026-04-24T23:55:29.232744Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Apr 24 23:55:29.246483 waagent[1845]: 2026-04-24T23:55:29.233051Z INFO Daemon Daemon Deploy ssh public key.
Apr 24 23:55:29.246547 systemd-networkd[1362]: eth0: DHCPv6 lease lost
Apr 24 23:55:29.272387 systemd-networkd[1362]: eth0: DHCPv4 address 10.0.0.29/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 24 23:55:32.834745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:55:32.839918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:55:32.958791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:55:32.972680 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:55:33.751677 kubelet[1926]: E0424 23:55:33.751592    1926 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:55:33.755590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:55:33.755810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:55:43.834563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 24 23:55:43.839580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:55:44.207065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:55:44.217658 (kubelet)[1941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:55:44.585659 kubelet[1941]: E0424 23:55:44.585601    1941 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:55:44.588384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:55:44.588589 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:55:44.736097 chronyd[1713]: Selected source PHC0
Apr 24 23:55:54.834683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 24 23:55:54.841574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:55:55.210156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:55:55.215027 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:55:55.602544 kubelet[1956]: E0424 23:55:55.602488    1956 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:55:55.605173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:55:55.605402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:55:57.763013 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Apr 24 23:55:59.300554 waagent[1845]: 2026-04-24T23:55:59.300482Z INFO Daemon Daemon Provisioning complete
Apr 24 23:55:59.315330 waagent[1845]: 2026-04-24T23:55:59.315260Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Apr 24 23:55:59.323430 waagent[1845]: 2026-04-24T23:55:59.315639Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Apr 24 23:55:59.323430 waagent[1845]: 2026-04-24T23:55:59.316159Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Apr 24 23:55:59.442633 waagent[1963]: 2026-04-24T23:55:59.442523Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Apr 24 23:55:59.443109 waagent[1963]: 2026-04-24T23:55:59.442696Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Apr 24 23:55:59.443109 waagent[1963]: 2026-04-24T23:55:59.442781Z INFO ExtHandler ExtHandler Python: 3.11.9
Apr 24 23:55:59.485954 waagent[1963]: 2026-04-24T23:55:59.485862Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 24 23:55:59.486190 waagent[1963]: 2026-04-24T23:55:59.486136Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 24 23:55:59.486294 waagent[1963]: 2026-04-24T23:55:59.486249Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 24 23:55:59.493440 waagent[1963]: 2026-04-24T23:55:59.493379Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 24 23:55:59.502413 waagent[1963]: 2026-04-24T23:55:59.502360Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181
Apr 24 23:55:59.502863 waagent[1963]: 2026-04-24T23:55:59.502813Z INFO ExtHandler
Apr 24 23:55:59.502941 waagent[1963]: 2026-04-24T23:55:59.502910Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6c8923f9-73f2-4c5a-a85b-c181e85e686b eTag: 7866287017488570865 source: Fabric]
Apr 24 23:55:59.503247 waagent[1963]: 2026-04-24T23:55:59.503201Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Apr 24 23:55:59.503851 waagent[1963]: 2026-04-24T23:55:59.503795Z INFO ExtHandler
Apr 24 23:55:59.503924 waagent[1963]: 2026-04-24T23:55:59.503882Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Apr 24 23:55:59.507578 waagent[1963]: 2026-04-24T23:55:59.507539Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Apr 24 23:55:59.645319 waagent[1963]: 2026-04-24T23:55:59.645217Z INFO ExtHandler Downloaded certificate {'thumbprint': '891701EF834A9E6AA7196CFE51638035D7AC1613', 'hasPrivateKey': True}
Apr 24 23:55:59.645896 waagent[1963]: 2026-04-24T23:55:59.645837Z INFO ExtHandler Fetch goal state completed
Apr 24 23:55:59.661088 waagent[1963]: 2026-04-24T23:55:59.661025Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1963
Apr 24 23:55:59.661255 waagent[1963]: 2026-04-24T23:55:59.661203Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Apr 24 23:55:59.662816 waagent[1963]: 2026-04-24T23:55:59.662756Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Apr 24 23:55:59.663183 waagent[1963]: 2026-04-24T23:55:59.663132Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 24 23:55:59.700404 waagent[1963]: 2026-04-24T23:55:59.700333Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 24 23:55:59.700667 waagent[1963]: 2026-04-24T23:55:59.700618Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 24 23:55:59.707540 waagent[1963]: 2026-04-24T23:55:59.707502Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 24 23:55:59.714910 systemd[1]: Reloading requested from client PID 1976 ('systemctl') (unit waagent.service)...
Apr 24 23:55:59.714929 systemd[1]: Reloading...
Apr 24 23:55:59.807372 zram_generator::config[2009]: No configuration found. Apr 24 23:55:59.933013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:56:00.014734 systemd[1]: Reloading finished in 299 ms. Apr 24 23:56:00.042725 waagent[1963]: 2026-04-24T23:56:00.042588Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 24 23:56:00.050022 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit waagent.service)... Apr 24 23:56:00.050041 systemd[1]: Reloading... Apr 24 23:56:00.143433 zram_generator::config[2101]: No configuration found. Apr 24 23:56:00.259956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:56:00.341670 systemd[1]: Reloading finished in 291 ms. Apr 24 23:56:00.369581 waagent[1963]: 2026-04-24T23:56:00.368498Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 24 23:56:00.369581 waagent[1963]: 2026-04-24T23:56:00.368713Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 24 23:56:00.784030 waagent[1963]: 2026-04-24T23:56:00.783925Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 24 23:56:00.784765 waagent[1963]: 2026-04-24T23:56:00.784701Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 24 23:56:00.785594 waagent[1963]: 2026-04-24T23:56:00.785531Z INFO ExtHandler ExtHandler Starting env monitor service. 
Apr 24 23:56:00.785748 waagent[1963]: 2026-04-24T23:56:00.785677Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:56:00.785933 waagent[1963]: 2026-04-24T23:56:00.785883Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:56:00.786509 waagent[1963]: 2026-04-24T23:56:00.786324Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 24 23:56:00.786594 waagent[1963]: 2026-04-24T23:56:00.786545Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 24 23:56:00.786930 waagent[1963]: 2026-04-24T23:56:00.786885Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:56:00.787260 waagent[1963]: 2026-04-24T23:56:00.787208Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:56:00.787391 waagent[1963]: 2026-04-24T23:56:00.787321Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 24 23:56:00.787391 waagent[1963]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 24 23:56:00.787391 waagent[1963]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 24 23:56:00.787391 waagent[1963]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 24 23:56:00.787391 waagent[1963]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:56:00.787391 waagent[1963]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:56:00.787391 waagent[1963]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:56:00.787679 waagent[1963]: 2026-04-24T23:56:00.787596Z INFO EnvHandler ExtHandler Configure routes Apr 24 23:56:00.787725 waagent[1963]: 2026-04-24T23:56:00.787694Z INFO EnvHandler ExtHandler Gateway:None Apr 24 23:56:00.787816 waagent[1963]: 2026-04-24T23:56:00.787766Z INFO EnvHandler ExtHandler Routes:None Apr 24 23:56:00.788232 waagent[1963]: 2026-04-24T23:56:00.788177Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 24 23:56:00.788557 waagent[1963]: 2026-04-24T23:56:00.788428Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 24 23:56:00.789023 waagent[1963]: 2026-04-24T23:56:00.788959Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 24 23:56:00.789237 waagent[1963]: 2026-04-24T23:56:00.789176Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 24 23:56:00.789428 waagent[1963]: 2026-04-24T23:56:00.789362Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 24 23:56:00.795980 waagent[1963]: 2026-04-24T23:56:00.795936Z INFO ExtHandler ExtHandler Apr 24 23:56:00.796354 waagent[1963]: 2026-04-24T23:56:00.796306Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1c5e1bbd-0516-482e-99e9-3e15d0aa7a78 correlation 3c5a0aab-7a9d-441e-8362-6752b4123533 created: 2026-04-24T23:54:25.042908Z] Apr 24 23:56:00.796727 waagent[1963]: 2026-04-24T23:56:00.796679Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 24 23:56:00.797236 waagent[1963]: 2026-04-24T23:56:00.797190Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Apr 24 23:56:00.832616 waagent[1963]: 2026-04-24T23:56:00.832454Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9BE164C2-B806-4DF8-BE7A-CFB3FEB9E857;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 24 23:56:00.841935 waagent[1963]: 2026-04-24T23:56:00.841870Z INFO MonitorHandler ExtHandler Network interfaces: Apr 24 23:56:00.841935 waagent[1963]: Executing ['ip', '-a', '-o', 'link']: Apr 24 23:56:00.841935 waagent[1963]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 24 23:56:00.841935 waagent[1963]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:c7:c2 brd ff:ff:ff:ff:ff:ff Apr 24 23:56:00.841935 waagent[1963]: 3: enP64114s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:c7:c2 brd ff:ff:ff:ff:ff:ff\ altname enP64114p0s2 Apr 24 23:56:00.841935 waagent[1963]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 24 23:56:00.841935 waagent[1963]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 24 23:56:00.841935 waagent[1963]: 2: eth0 inet 10.0.0.29/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 24 23:56:00.841935 waagent[1963]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 24 23:56:00.841935 waagent[1963]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 24 23:56:00.841935 waagent[1963]: 2: eth0 inet6 fe80::7e1e:52ff:fe1f:c7c2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 24 23:56:00.915966 waagent[1963]: 2026-04-24T23:56:00.915893Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Apr 24 23:56:00.915966 waagent[1963]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:56:00.915966 waagent[1963]: pkts bytes target prot opt in out source destination Apr 24 23:56:00.915966 waagent[1963]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:56:00.915966 waagent[1963]: pkts bytes target prot opt in out source destination Apr 24 23:56:00.915966 waagent[1963]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:56:00.915966 waagent[1963]: pkts bytes target prot opt in out source destination Apr 24 23:56:00.915966 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 24 23:56:00.915966 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 24 23:56:00.915966 waagent[1963]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 24 23:56:00.919373 waagent[1963]: 2026-04-24T23:56:00.919300Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 24 23:56:00.919373 waagent[1963]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:56:00.919373 waagent[1963]: pkts bytes target prot opt in out source destination Apr 24 23:56:00.919373 waagent[1963]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:56:00.919373 waagent[1963]: pkts bytes target prot opt in out source destination Apr 24 23:56:00.919373 waagent[1963]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:56:00.919373 waagent[1963]: pkts bytes target prot opt in out source destination Apr 24 23:56:00.919373 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 24 23:56:00.919373 waagent[1963]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 24 23:56:00.919373 waagent[1963]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 24 23:56:00.919762 waagent[1963]: 2026-04-24T23:56:00.919658Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 24 23:56:05.834903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 24 23:56:05.841575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:06.177414 update_engine[1715]: I20260424 23:56:06.176421 1715 update_attempter.cc:509] Updating boot flags... Apr 24 23:56:06.589375 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2207) Apr 24 23:56:06.621632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:06.631732 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:56:06.732734 kubelet[2232]: E0424 23:56:06.732679 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:56:06.740226 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2212) Apr 24 23:56:06.738594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:56:06.738788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:56:06.858374 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2212) Apr 24 23:56:11.559010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:56:11.563633 systemd[1]: Started sshd@0-10.0.0.29:22-4.175.71.9:52146.service - OpenSSH per-connection server daemon (4.175.71.9:52146).
Apr 24 23:56:11.726930 sshd[2301]: Accepted publickey for core from 4.175.71.9 port 52146 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:11.728527 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:11.733918 systemd-logind[1712]: New session 3 of user core. Apr 24 23:56:11.743570 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:56:11.868675 systemd[1]: Started sshd@1-10.0.0.29:22-4.175.71.9:52156.service - OpenSSH per-connection server daemon (4.175.71.9:52156). Apr 24 23:56:11.982089 sshd[2306]: Accepted publickey for core from 4.175.71.9 port 52156 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:11.983630 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:11.988912 systemd-logind[1712]: New session 4 of user core. Apr 24 23:56:11.994531 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:56:12.090574 sshd[2306]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:12.094175 systemd[1]: sshd@1-10.0.0.29:22-4.175.71.9:52156.service: Deactivated successfully. Apr 24 23:56:12.096037 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:56:12.096801 systemd-logind[1712]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:56:12.097784 systemd-logind[1712]: Removed session 4. Apr 24 23:56:12.115067 systemd[1]: Started sshd@2-10.0.0.29:22-4.175.71.9:52162.service - OpenSSH per-connection server daemon (4.175.71.9:52162). Apr 24 23:56:12.227527 sshd[2313]: Accepted publickey for core from 4.175.71.9 port 52162 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:12.228995 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:12.234057 systemd-logind[1712]: New session 5 of user core. Apr 24 23:56:12.239760 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 23:56:12.329764 sshd[2313]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:12.333001 systemd[1]: sshd@2-10.0.0.29:22-4.175.71.9:52162.service: Deactivated successfully. Apr 24 23:56:12.335059 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:56:12.336485 systemd-logind[1712]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:56:12.337694 systemd-logind[1712]: Removed session 5. Apr 24 23:56:12.356029 systemd[1]: Started sshd@3-10.0.0.29:22-4.175.71.9:52174.service - OpenSSH per-connection server daemon (4.175.71.9:52174). Apr 24 23:56:12.468246 sshd[2320]: Accepted publickey for core from 4.175.71.9 port 52174 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:12.469767 sshd[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:12.474500 systemd-logind[1712]: New session 6 of user core. Apr 24 23:56:12.484507 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 23:56:12.579775 sshd[2320]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:12.583590 systemd[1]: sshd@3-10.0.0.29:22-4.175.71.9:52174.service: Deactivated successfully. Apr 24 23:56:12.585433 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:56:12.586234 systemd-logind[1712]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:56:12.587237 systemd-logind[1712]: Removed session 6. Apr 24 23:56:12.607936 systemd[1]: Started sshd@4-10.0.0.29:22-4.175.71.9:52180.service - OpenSSH per-connection server daemon (4.175.71.9:52180). Apr 24 23:56:12.734591 sshd[2327]: Accepted publickey for core from 4.175.71.9 port 52180 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:12.736059 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:12.740992 systemd-logind[1712]: New session 7 of user core. 
Apr 24 23:56:12.748508 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:56:12.947807 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:56:12.948188 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:12.973963 sudo[2330]: pam_unix(sudo:session): session closed for user root Apr 24 23:56:12.989630 sshd[2327]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:12.992981 systemd[1]: sshd@4-10.0.0.29:22-4.175.71.9:52180.service: Deactivated successfully. Apr 24 23:56:12.995113 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:56:12.996675 systemd-logind[1712]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:56:12.997854 systemd-logind[1712]: Removed session 7. Apr 24 23:56:13.013497 systemd[1]: Started sshd@5-10.0.0.29:22-4.175.71.9:52196.service - OpenSSH per-connection server daemon (4.175.71.9:52196). Apr 24 23:56:13.125696 sshd[2335]: Accepted publickey for core from 4.175.71.9 port 52196 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:13.127188 sshd[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:13.132551 systemd-logind[1712]: New session 8 of user core. Apr 24 23:56:13.141778 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 24 23:56:13.223390 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:56:13.223767 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:13.227173 sudo[2339]: pam_unix(sudo:session): session closed for user root Apr 24 23:56:13.232375 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:56:13.232728 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:13.244657 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:56:13.247887 auditctl[2342]: No rules Apr 24 23:56:13.248260 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:56:13.248491 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:56:13.251112 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:56:13.287972 augenrules[2360]: No rules Apr 24 23:56:13.289575 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:56:13.291297 sudo[2338]: pam_unix(sudo:session): session closed for user root Apr 24 23:56:13.306721 sshd[2335]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:13.309415 systemd[1]: sshd@5-10.0.0.29:22-4.175.71.9:52196.service: Deactivated successfully. Apr 24 23:56:13.311089 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:56:13.312705 systemd-logind[1712]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:56:13.313776 systemd-logind[1712]: Removed session 8. Apr 24 23:56:13.331913 systemd[1]: Started sshd@6-10.0.0.29:22-4.175.71.9:52210.service - OpenSSH per-connection server daemon (4.175.71.9:52210). 
Apr 24 23:56:13.445197 sshd[2368]: Accepted publickey for core from 4.175.71.9 port 52210 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:13.446708 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:13.451873 systemd-logind[1712]: New session 9 of user core. Apr 24 23:56:13.457511 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:56:13.537060 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:56:13.537462 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:14.603699 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:56:14.614747 (dockerd)[2386]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:56:15.708677 dockerd[2386]: time="2026-04-24T23:56:15.708597877Z" level=info msg="Starting up" Apr 24 23:56:16.096760 dockerd[2386]: time="2026-04-24T23:56:16.096714480Z" level=info msg="Loading containers: start." Apr 24 23:56:16.251369 kernel: Initializing XFRM netlink socket Apr 24 23:56:16.398815 systemd-networkd[1362]: docker0: Link UP Apr 24 23:56:16.421551 dockerd[2386]: time="2026-04-24T23:56:16.421503466Z" level=info msg="Loading containers: done." 
Apr 24 23:56:16.476649 dockerd[2386]: time="2026-04-24T23:56:16.476595276Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:56:16.476832 dockerd[2386]: time="2026-04-24T23:56:16.476736178Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:56:16.476880 dockerd[2386]: time="2026-04-24T23:56:16.476868679Z" level=info msg="Daemon has completed initialization" Apr 24 23:56:16.540516 dockerd[2386]: time="2026-04-24T23:56:16.539818891Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:56:16.540042 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:56:16.835168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 24 23:56:16.844628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:17.048377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:17.063733 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:56:17.100413 kubelet[2530]: E0424 23:56:17.099487 2530 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:56:17.102135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:56:17.102369 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 24 23:56:17.910258 containerd[1723]: time="2026-04-24T23:56:17.910214566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 24 23:56:18.730621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390001658.mount: Deactivated successfully. Apr 24 23:56:20.305011 containerd[1723]: time="2026-04-24T23:56:20.304950238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:20.308420 containerd[1723]: time="2026-04-24T23:56:20.307717703Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193997" Apr 24 23:56:20.311982 containerd[1723]: time="2026-04-24T23:56:20.311827700Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:20.316400 containerd[1723]: time="2026-04-24T23:56:20.316334106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:20.320170 containerd[1723]: time="2026-04-24T23:56:20.320119195Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.409861528s" Apr 24 23:56:20.320271 containerd[1723]: time="2026-04-24T23:56:20.320180496Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 24 23:56:20.322649 containerd[1723]: time="2026-04-24T23:56:20.322402649Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 24 23:56:21.925397 containerd[1723]: time="2026-04-24T23:56:21.925326031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:21.927701 containerd[1723]: time="2026-04-24T23:56:21.927496162Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171455" Apr 24 23:56:21.930469 containerd[1723]: time="2026-04-24T23:56:21.930434205Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:21.934827 containerd[1723]: time="2026-04-24T23:56:21.934768067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:21.935923 containerd[1723]: time="2026-04-24T23:56:21.935783982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.613344933s" Apr 24 23:56:21.935923 containerd[1723]: time="2026-04-24T23:56:21.935823583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 24 23:56:21.936665 containerd[1723]: time="2026-04-24T23:56:21.936641094Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 24 23:56:23.295976 containerd[1723]: time="2026-04-24T23:56:23.295912708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:23.298403 containerd[1723]: time="2026-04-24T23:56:23.298318643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289764" Apr 24 23:56:23.301255 containerd[1723]: time="2026-04-24T23:56:23.301202785Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:23.305765 containerd[1723]: time="2026-04-24T23:56:23.305676249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:23.307135 containerd[1723]: time="2026-04-24T23:56:23.306947568Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.370042269s" Apr 24 23:56:23.307135 containerd[1723]: time="2026-04-24T23:56:23.306985868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 24 23:56:23.307840 containerd[1723]: time="2026-04-24T23:56:23.307665178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 24 23:56:24.375159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526798150.mount: Deactivated successfully.
Apr 24 23:56:24.908148 containerd[1723]: time="2026-04-24T23:56:24.908085980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:24.910141 containerd[1723]: time="2026-04-24T23:56:24.910093213Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010719" Apr 24 23:56:24.912775 containerd[1723]: time="2026-04-24T23:56:24.912722456Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:24.916596 containerd[1723]: time="2026-04-24T23:56:24.916545418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:24.917624 containerd[1723]: time="2026-04-24T23:56:24.917138228Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.608967142s" Apr 24 23:56:24.917624 containerd[1723]: time="2026-04-24T23:56:24.917179728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 24 23:56:24.917921 containerd[1723]: time="2026-04-24T23:56:24.917900040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 24 23:56:25.523388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916186115.mount: Deactivated successfully. 
Apr 24 23:56:26.760548 containerd[1723]: time="2026-04-24T23:56:26.760482898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:26.762783 containerd[1723]: time="2026-04-24T23:56:26.762704934Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Apr 24 23:56:26.766920 containerd[1723]: time="2026-04-24T23:56:26.766857402Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:26.773246 containerd[1723]: time="2026-04-24T23:56:26.771256674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:26.773246 containerd[1723]: time="2026-04-24T23:56:26.772575395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.854572154s"
Apr 24 23:56:26.773246 containerd[1723]: time="2026-04-24T23:56:26.772615896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 24 23:56:26.773695 containerd[1723]: time="2026-04-24T23:56:26.773671913Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 24 23:56:27.328223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 24 23:56:27.336875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:56:27.338924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount319119163.mount: Deactivated successfully.
Apr 24 23:56:27.349560 containerd[1723]: time="2026-04-24T23:56:27.348450690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:27.352432 containerd[1723]: time="2026-04-24T23:56:27.352381254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Apr 24 23:56:27.358001 containerd[1723]: time="2026-04-24T23:56:27.357963645Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:27.365956 containerd[1723]: time="2026-04-24T23:56:27.365910174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:27.367436 containerd[1723]: time="2026-04-24T23:56:27.366858890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 593.078075ms"
Apr 24 23:56:27.368071 containerd[1723]: time="2026-04-24T23:56:27.368045009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 24 23:56:27.368979 containerd[1723]: time="2026-04-24T23:56:27.368805822Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 24 23:56:27.453910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:56:27.467297 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:56:28.107250 kubelet[2675]: E0424 23:56:28.107150 2675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:56:28.109930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:56:28.110137 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:56:29.001373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069980793.mount: Deactivated successfully.
Apr 24 23:56:30.465984 containerd[1723]: time="2026-04-24T23:56:30.465926945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:30.468811 containerd[1723]: time="2026-04-24T23:56:30.468665790Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719434"
Apr 24 23:56:30.471937 containerd[1723]: time="2026-04-24T23:56:30.471883442Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:30.477327 containerd[1723]: time="2026-04-24T23:56:30.476971625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:56:30.478438 containerd[1723]: time="2026-04-24T23:56:30.478257546Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.109017917s"
Apr 24 23:56:30.478438 containerd[1723]: time="2026-04-24T23:56:30.478297947Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 24 23:56:33.705494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:56:33.711652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:56:33.747520 systemd[1]: Reloading requested from client PID 2775 ('systemctl') (unit session-9.scope)...
Apr 24 23:56:33.747541 systemd[1]: Reloading...
Apr 24 23:56:33.857413 zram_generator::config[2815]: No configuration found.
Apr 24 23:56:33.994175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:56:34.078112 systemd[1]: Reloading finished in 330 ms.
Apr 24 23:56:34.128120 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 24 23:56:34.128196 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 24 23:56:34.128484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:56:34.140050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:56:35.143625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:56:35.149427 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:56:35.195495 kubelet[2882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:56:35.195495 kubelet[2882]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:56:35.195495 kubelet[2882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:56:35.195495 kubelet[2882]: I0424 23:56:35.193694 2882 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:56:36.382792 kubelet[2882]: I0424 23:56:36.382456 2882 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:56:36.382792 kubelet[2882]: I0424 23:56:36.382492 2882 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:56:36.383289 kubelet[2882]: I0424 23:56:36.382852 2882 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:56:36.482473 kubelet[2882]: E0424 23:56:36.482426 2882 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 23:56:36.513962 kubelet[2882]: I0424 23:56:36.513267 2882 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:56:36.521140 kubelet[2882]: E0424 23:56:36.521095 2882 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:56:36.521140 kubelet[2882]: I0424 23:56:36.521138 2882 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:56:36.525043 kubelet[2882]: I0424 23:56:36.525018 2882 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:56:36.525290 kubelet[2882]: I0424 23:56:36.525256 2882 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:56:36.525494 kubelet[2882]: I0424 23:56:36.525284 2882 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-b07cc1dc35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 23:56:36.525668 kubelet[2882]: I0424 23:56:36.525499 2882 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:56:36.525668 kubelet[2882]: I0424 23:56:36.525514 2882 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:56:36.525668 kubelet[2882]: I0424 23:56:36.525668 2882 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:56:36.579736 kubelet[2882]: I0424 23:56:36.579679 2882 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:56:36.579736 kubelet[2882]: I0424 23:56:36.579728 2882 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:56:36.579942 kubelet[2882]: I0424 23:56:36.579768 2882 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:56:36.627717 kubelet[2882]: I0424 23:56:36.627377 2882 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:56:36.630654 kubelet[2882]: E0424 23:56:36.630417 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-b07cc1dc35&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:56:36.630654 kubelet[2882]: E0424 23:56:36.630554 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:56:36.631256 kubelet[2882]: I0424 23:56:36.631225 2882 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:56:36.631740 kubelet[2882]: I0424 23:56:36.631713 2882 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:56:36.633403 kubelet[2882]: W0424 23:56:36.632554 2882 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 23:56:36.677580 kubelet[2882]: I0424 23:56:36.677220 2882 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:56:36.677580 kubelet[2882]: I0424 23:56:36.677285 2882 server.go:1289] "Started kubelet"
Apr 24 23:56:36.767077 kubelet[2882]: I0424 23:56:36.766991 2882 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:56:36.767983 kubelet[2882]: I0424 23:56:36.767952 2882 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:56:36.773253 kubelet[2882]: I0424 23:56:36.772839 2882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:56:36.778134 kubelet[2882]: I0424 23:56:36.778095 2882 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:56:36.781155 kubelet[2882]: I0424 23:56:36.781124 2882 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:56:36.782263 kubelet[2882]: I0424 23:56:36.782242 2882 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:56:36.782520 kubelet[2882]: E0424 23:56:36.782496 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found"
Apr 24 23:56:36.787895 kubelet[2882]: I0424 23:56:36.787867 2882 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:56:36.787969 kubelet[2882]: I0424 23:56:36.787923 2882 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:56:36.802659 kubelet[2882]: E0424 23:56:36.802616 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b07cc1dc35?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="200ms"
Apr 24 23:56:36.818723 kubelet[2882]: E0424 23:56:36.818328 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:56:36.818723 kubelet[2882]: I0424 23:56:36.818525 2882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:56:36.818951 kubelet[2882]: I0424 23:56:36.818930 2882 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:56:36.965750 kubelet[2882]: E0424 23:56:36.819415 2882 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.29:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-b07cc1dc35.18a9704b4c2d0856 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-b07cc1dc35,UID:ci-4081.3.6-n-b07cc1dc35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-b07cc1dc35,},FirstTimestamp:2026-04-24 23:56:36.67724911 +0000 UTC m=+1.523748205,LastTimestamp:2026-04-24 23:56:36.67724911 +0000 UTC m=+1.523748205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-b07cc1dc35,}"
Apr 24 23:56:36.965750 kubelet[2882]: I0424 23:56:36.965105 2882 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:56:36.965750 kubelet[2882]: I0424 23:56:36.965225 2882 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:56:36.968324 kubelet[2882]: E0424 23:56:36.966717 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found"
Apr 24 23:56:36.968798 kubelet[2882]: E0424 23:56:36.968781 2882 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:56:36.970943 kubelet[2882]: I0424 23:56:36.970501 2882 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:56:36.998890 kubelet[2882]: I0424 23:56:36.998866 2882 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:56:36.998999 kubelet[2882]: I0424 23:56:36.998904 2882 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:56:36.998999 kubelet[2882]: I0424 23:56:36.998925 2882 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:56:37.003232 kubelet[2882]: E0424 23:56:37.003195 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b07cc1dc35?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="400ms"
Apr 24 23:56:37.067620 kubelet[2882]: E0424 23:56:37.067570 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found"
Apr 24 23:56:37.167953 kubelet[2882]: E0424 23:56:37.167882 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found"
Apr 24 23:56:37.183479 kubelet[2882]: I0424 23:56:37.183440 2882 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:56:37.183479 kubelet[2882]: I0424 23:56:37.183476 2882 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:56:37.183642 kubelet[2882]: I0424 23:56:37.183516 2882 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:56:37.183642 kubelet[2882]: I0424 23:56:37.183526 2882 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:56:37.183642 kubelet[2882]: E0424 23:56:37.183575 2882 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:56:37.268533 kubelet[2882]: E0424 23:56:37.268234 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found"
Apr 24 23:56:37.272899 kubelet[2882]: E0424 23:56:37.272856 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:56:37.284262 kubelet[2882]: E0424 23:56:37.284229 2882 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 24 23:56:37.320734 kubelet[2882]: I0424 23:56:37.320704 2882 policy_none.go:49] "None policy: Start"
Apr 24 23:56:37.320734 kubelet[2882]: I0424 23:56:37.320739 2882 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:56:37.320886 kubelet[2882]: I0424 23:56:37.320758 2882 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:56:37.368928 kubelet[2882]: E0424 23:56:37.368684 2882 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found"
Apr 24 23:56:37.373397 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 24 23:56:37.382585 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 24 23:56:37.386393 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 24 23:56:37.394030 kubelet[2882]: E0424 23:56:37.393998 2882 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:56:37.394339 kubelet[2882]: I0424 23:56:37.394239 2882 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:56:37.394339 kubelet[2882]: I0424 23:56:37.394254 2882 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:56:37.395085 kubelet[2882]: I0424 23:56:37.395046 2882 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:56:37.397185 kubelet[2882]: E0424 23:56:37.397160 2882 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:56:37.397397 kubelet[2882]: E0424 23:56:37.397377 2882 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-b07cc1dc35\" not found"
Apr 24 23:56:37.403843 kubelet[2882]: E0424 23:56:37.403811 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b07cc1dc35?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="800ms"
Apr 24 23:56:37.496244 kubelet[2882]: I0424 23:56:37.496211 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.496579 kubelet[2882]: E0424 23:56:37.496549 2882 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.669423 kubelet[2882]: I0424 23:56:37.567973 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87b36bf70ec14354481b8a06ea337473-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b07cc1dc35\" (UID: \"87b36bf70ec14354481b8a06ea337473\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.669423 kubelet[2882]: I0424 23:56:37.568037 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87b36bf70ec14354481b8a06ea337473-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-b07cc1dc35\" (UID: \"87b36bf70ec14354481b8a06ea337473\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.669423 kubelet[2882]: I0424 23:56:37.568069 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87b36bf70ec14354481b8a06ea337473-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b07cc1dc35\" (UID: \"87b36bf70ec14354481b8a06ea337473\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.698763 kubelet[2882]: I0424 23:56:37.698726 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.717540 kubelet[2882]: E0424 23:56:37.699054 2882 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.774563 systemd[1]: Created slice kubepods-burstable-pod87b36bf70ec14354481b8a06ea337473.slice - libcontainer container kubepods-burstable-pod87b36bf70ec14354481b8a06ea337473.slice.
Apr 24 23:56:37.780987 kubelet[2882]: E0424 23:56:37.780951 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.781969 containerd[1723]: time="2026-04-24T23:56:37.781920714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-b07cc1dc35,Uid:87b36bf70ec14354481b8a06ea337473,Namespace:kube-system,Attempt:0,}"
Apr 24 23:56:37.829030 kubelet[2882]: E0424 23:56:37.828979 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-b07cc1dc35&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:56:37.870523 kubelet[2882]: I0424 23:56:37.870436 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.870523 kubelet[2882]: I0424 23:56:37.870486 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.870726 kubelet[2882]: I0424 23:56:37.870534 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.870726 kubelet[2882]: I0424 23:56:37.870570 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.870726 kubelet[2882]: I0424 23:56:37.870596 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.934831 systemd[1]: Created slice kubepods-burstable-pod733287ecb988aba42dd804ea71e92796.slice - libcontainer container kubepods-burstable-pod733287ecb988aba42dd804ea71e92796.slice.
Apr 24 23:56:37.937989 kubelet[2882]: E0424 23:56:37.937789 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.950836 systemd[1]: Created slice kubepods-burstable-pod6260f243ec0e4a9d2e2d4e3e538adbec.slice - libcontainer container kubepods-burstable-pod6260f243ec0e4a9d2e2d4e3e538adbec.slice.
Apr 24 23:56:37.952633 kubelet[2882]: E0424 23:56:37.952606 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:37.968130 kubelet[2882]: E0424 23:56:37.968095 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:56:38.072492 kubelet[2882]: I0424 23:56:38.072440 2882 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6260f243ec0e4a9d2e2d4e3e538adbec-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-b07cc1dc35\" (UID: \"6260f243ec0e4a9d2e2d4e3e538adbec\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:38.101993 kubelet[2882]: I0424 23:56:38.101953 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:38.102329 kubelet[2882]: E0424 23:56:38.102298 2882 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="ci-4081.3.6-n-b07cc1dc35"
Apr 24 23:56:38.161464 kubelet[2882]: E0424 23:56:38.161412 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:56:38.204707 kubelet[2882]: E0424 23:56:38.204583 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-b07cc1dc35?timeout=10s\": dial tcp 10.0.0.29:6443: connect: connection refused" interval="1.6s"
Apr 24 23:56:38.239643 containerd[1723]: time="2026-04-24T23:56:38.239595193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-b07cc1dc35,Uid:733287ecb988aba42dd804ea71e92796,Namespace:kube-system,Attempt:0,}"
Apr 24 23:56:38.255292 containerd[1723]: time="2026-04-24T23:56:38.255026125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-b07cc1dc35,Uid:6260f243ec0e4a9d2e2d4e3e538adbec,Namespace:kube-system,Attempt:0,}"
Apr 24 23:56:38.533917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769124136.mount: Deactivated successfully.
Apr 24 23:56:38.548225 kubelet[2882]: E0424 23:56:38.548179 2882 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:56:38.551202 containerd[1723]: time="2026-04-24T23:56:38.551151376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:38.557454 containerd[1723]: time="2026-04-24T23:56:38.557405270Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:56:38.560457 containerd[1723]: time="2026-04-24T23:56:38.560315314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:38.562846 containerd[1723]: time="2026-04-24T23:56:38.562809851Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:38.565991 containerd[1723]: time="2026-04-24T23:56:38.565940798Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:38.568627 containerd[1723]: time="2026-04-24T23:56:38.568584438Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:56:38.571046 containerd[1723]: time="2026-04-24T23:56:38.570979974Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 24 23:56:38.574831 containerd[1723]: time="2026-04-24T23:56:38.574777331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:38.576786 containerd[1723]: time="2026-04-24T23:56:38.575617044Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 320.508518ms" Apr 24 23:56:38.577466 containerd[1723]: time="2026-04-24T23:56:38.577258868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 795.250453ms" Apr 24 23:56:38.577900 containerd[1723]: time="2026-04-24T23:56:38.577870378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 338.177284ms" Apr 24 23:56:38.759239 kubelet[2882]: E0424 23:56:38.759180 2882 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.29:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:56:38.904622 kubelet[2882]: I0424 23:56:38.904585 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:38.905009 kubelet[2882]: E0424 23:56:38.904971 2882 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.29:6443/api/v1/nodes\": dial tcp 10.0.0.29:6443: connect: connection refused" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:39.184968 containerd[1723]: time="2026-04-24T23:56:39.184723299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:39.186796 containerd[1723]: time="2026-04-24T23:56:39.186569727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:39.186796 containerd[1723]: time="2026-04-24T23:56:39.186613128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:39.186796 containerd[1723]: time="2026-04-24T23:56:39.186720529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:39.188264 containerd[1723]: time="2026-04-24T23:56:39.188126950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:39.189365 containerd[1723]: time="2026-04-24T23:56:39.189008664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:39.189554 containerd[1723]: time="2026-04-24T23:56:39.189481771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:39.189910 containerd[1723]: time="2026-04-24T23:56:39.189837076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:39.189981 containerd[1723]: time="2026-04-24T23:56:39.189941778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:39.190031 containerd[1723]: time="2026-04-24T23:56:39.189983478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:39.190230 containerd[1723]: time="2026-04-24T23:56:39.190138081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:39.190699 containerd[1723]: time="2026-04-24T23:56:39.190613388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:39.238979 systemd[1]: Started cri-containerd-1d34ae2136f1f8e542489a2ef7a46f74f65e9c9ede3914ba833554fffd8ddf31.scope - libcontainer container 1d34ae2136f1f8e542489a2ef7a46f74f65e9c9ede3914ba833554fffd8ddf31. Apr 24 23:56:39.246223 systemd[1]: Started cri-containerd-6fbe42845106ab54c1749783ce985f733d13e6d05a4e2917977efb80b1336b12.scope - libcontainer container 6fbe42845106ab54c1749783ce985f733d13e6d05a4e2917977efb80b1336b12. Apr 24 23:56:39.251237 systemd[1]: Started cri-containerd-4b71b6ff66ed802d497b8328cf25478b79f8c8cf07e90536bb3e5ba74a9a894a.scope - libcontainer container 4b71b6ff66ed802d497b8328cf25478b79f8c8cf07e90536bb3e5ba74a9a894a. 
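Each sandbox above runs in a transient systemd scope named `cri-containerd-<64-hex-id>.scope`, so the sandbox/container IDs can be pulled straight out of the "Started cri-containerd-…" records. A small extraction sketch (helper name is illustrative):

```python
import re

# Transient scope units for CRI sandboxes look like
# "cri-containerd-<64 hex chars>.scope"; capture the ID.
SCOPE_RE = re.compile(r"cri-containerd-([0-9a-f]{64})\.scope")

def sandbox_ids(log_text):
    """Return the unique sandbox/container IDs named in systemd scope units."""
    return sorted(set(SCOPE_RE.findall(log_text)))

# One of the records from this log:
line = ("systemd[1]: Started cri-containerd-1d34ae2136f1f8e542489a2ef7a46f74"
        "f65e9c9ede3914ba833554fffd8ddf31.scope - libcontainer container ...")
```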
Apr 24 23:56:39.332819 containerd[1723]: time="2026-04-24T23:56:39.332774125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-b07cc1dc35,Uid:6260f243ec0e4a9d2e2d4e3e538adbec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d34ae2136f1f8e542489a2ef7a46f74f65e9c9ede3914ba833554fffd8ddf31\"" Apr 24 23:56:39.344511 containerd[1723]: time="2026-04-24T23:56:39.344466000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-b07cc1dc35,Uid:87b36bf70ec14354481b8a06ea337473,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b71b6ff66ed802d497b8328cf25478b79f8c8cf07e90536bb3e5ba74a9a894a\"" Apr 24 23:56:39.350671 containerd[1723]: time="2026-04-24T23:56:39.350486291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-b07cc1dc35,Uid:733287ecb988aba42dd804ea71e92796,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fbe42845106ab54c1749783ce985f733d13e6d05a4e2917977efb80b1336b12\"" Apr 24 23:56:39.352511 containerd[1723]: time="2026-04-24T23:56:39.352290618Z" level=info msg="CreateContainer within sandbox \"1d34ae2136f1f8e542489a2ef7a46f74f65e9c9ede3914ba833554fffd8ddf31\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:56:39.358675 containerd[1723]: time="2026-04-24T23:56:39.358641213Z" level=info msg="CreateContainer within sandbox \"4b71b6ff66ed802d497b8328cf25478b79f8c8cf07e90536bb3e5ba74a9a894a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:56:39.366600 containerd[1723]: time="2026-04-24T23:56:39.366496231Z" level=info msg="CreateContainer within sandbox \"6fbe42845106ab54c1749783ce985f733d13e6d05a4e2917977efb80b1336b12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:56:39.406896 containerd[1723]: time="2026-04-24T23:56:39.406726236Z" level=info msg="CreateContainer within sandbox 
\"1d34ae2136f1f8e542489a2ef7a46f74f65e9c9ede3914ba833554fffd8ddf31\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c86a638fbbc8fa901b1e76206775d5b87184a727b58501fe1bf419c1886577f\"" Apr 24 23:56:39.407596 containerd[1723]: time="2026-04-24T23:56:39.407560549Z" level=info msg="StartContainer for \"7c86a638fbbc8fa901b1e76206775d5b87184a727b58501fe1bf419c1886577f\"" Apr 24 23:56:39.419866 containerd[1723]: time="2026-04-24T23:56:39.419746932Z" level=info msg="CreateContainer within sandbox \"4b71b6ff66ed802d497b8328cf25478b79f8c8cf07e90536bb3e5ba74a9a894a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26b453e5eb74a726c76a7f56d026e492f06874187ffd9db9375e3a88303e89fc\"" Apr 24 23:56:39.421898 containerd[1723]: time="2026-04-24T23:56:39.420476143Z" level=info msg="StartContainer for \"26b453e5eb74a726c76a7f56d026e492f06874187ffd9db9375e3a88303e89fc\"" Apr 24 23:56:39.429490 containerd[1723]: time="2026-04-24T23:56:39.429448878Z" level=info msg="CreateContainer within sandbox \"6fbe42845106ab54c1749783ce985f733d13e6d05a4e2917977efb80b1336b12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad6de76bf6951e8f4b87f06f9d769efa0ec4f60a3f1c24b177363e57ea0fbd74\"" Apr 24 23:56:39.430437 containerd[1723]: time="2026-04-24T23:56:39.430408092Z" level=info msg="StartContainer for \"ad6de76bf6951e8f4b87f06f9d769efa0ec4f60a3f1c24b177363e57ea0fbd74\"" Apr 24 23:56:39.443628 systemd[1]: Started cri-containerd-7c86a638fbbc8fa901b1e76206775d5b87184a727b58501fe1bf419c1886577f.scope - libcontainer container 7c86a638fbbc8fa901b1e76206775d5b87184a727b58501fe1bf419c1886577f. Apr 24 23:56:39.488637 systemd[1]: Started cri-containerd-26b453e5eb74a726c76a7f56d026e492f06874187ffd9db9375e3a88303e89fc.scope - libcontainer container 26b453e5eb74a726c76a7f56d026e492f06874187ffd9db9375e3a88303e89fc. 
Apr 24 23:56:39.498712 systemd[1]: Started cri-containerd-ad6de76bf6951e8f4b87f06f9d769efa0ec4f60a3f1c24b177363e57ea0fbd74.scope - libcontainer container ad6de76bf6951e8f4b87f06f9d769efa0ec4f60a3f1c24b177363e57ea0fbd74. Apr 24 23:56:39.547173 containerd[1723]: time="2026-04-24T23:56:39.546949444Z" level=info msg="StartContainer for \"7c86a638fbbc8fa901b1e76206775d5b87184a727b58501fe1bf419c1886577f\" returns successfully" Apr 24 23:56:39.589791 containerd[1723]: time="2026-04-24T23:56:39.589423682Z" level=info msg="StartContainer for \"26b453e5eb74a726c76a7f56d026e492f06874187ffd9db9375e3a88303e89fc\" returns successfully" Apr 24 23:56:39.627032 containerd[1723]: time="2026-04-24T23:56:39.626966747Z" level=info msg="StartContainer for \"ad6de76bf6951e8f4b87f06f9d769efa0ec4f60a3f1c24b177363e57ea0fbd74\" returns successfully" Apr 24 23:56:40.203965 kubelet[2882]: E0424 23:56:40.203631 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:40.207178 kubelet[2882]: E0424 23:56:40.206543 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:40.207178 kubelet[2882]: E0424 23:56:40.206988 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:40.508209 kubelet[2882]: I0424 23:56:40.507878 2882 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.207165 kubelet[2882]: E0424 23:56:41.206853 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.208883 
kubelet[2882]: E0424 23:56:41.208688 2882 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.416334 kubelet[2882]: E0424 23:56:41.416278 2882 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-b07cc1dc35\" not found" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.437959 kubelet[2882]: I0424 23:56:41.437856 2882 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.437959 kubelet[2882]: E0424 23:56:41.437916 2882 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-b07cc1dc35\": node \"ci-4081.3.6-n-b07cc1dc35\" not found" Apr 24 23:56:41.484707 kubelet[2882]: I0424 23:56:41.483572 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.500713 kubelet[2882]: E0424 23:56:41.500466 2882 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-b07cc1dc35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.500713 kubelet[2882]: I0424 23:56:41.500506 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.502769 kubelet[2882]: E0424 23:56:41.502540 2882 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.502769 kubelet[2882]: I0424 23:56:41.502565 2882 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.504329 kubelet[2882]: E0424 23:56:41.504298 2882 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-b07cc1dc35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:41.634723 kubelet[2882]: I0424 23:56:41.634665 2882 apiserver.go:52] "Watching apiserver" Apr 24 23:56:41.688630 kubelet[2882]: I0424 23:56:41.688574 2882 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:56:42.207622 kubelet[2882]: I0424 23:56:42.207140 2882 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:42.209930 kubelet[2882]: E0424 23:56:42.209895 2882 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-b07cc1dc35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:43.405491 systemd[1]: Reloading requested from client PID 3160 ('systemctl') (unit session-9.scope)... Apr 24 23:56:43.405508 systemd[1]: Reloading... Apr 24 23:56:43.506367 zram_generator::config[3200]: No configuration found. Apr 24 23:56:43.644562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:56:43.738064 systemd[1]: Reloading finished in 332 ms. Apr 24 23:56:43.783969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:43.792633 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:56:43.792967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:56:43.793033 systemd[1]: kubelet.service: Consumed 1.005s CPU time, 131.4M memory peak, 0B memory swap peak. Apr 24 23:56:43.797728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:44.018145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:44.031941 (kubelet)[3267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:56:44.072969 kubelet[3267]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:56:44.073722 kubelet[3267]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:56:44.073722 kubelet[3267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
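The deprecation warnings above say these flags should move into the file passed via `--config`. A hedged sketch of the equivalent KubeletConfiguration fields (paths and endpoint values are illustrative, not taken from this host):

```yaml
# Hypothetical kubelet config file fragment showing config-file
# equivalents of the deprecated flags logged above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /var/lib/kubelet/volumeplugins
# --pod-infra-container-image has no KubeletConfiguration field; per the
# warning it is being removed, and the sandbox image is configured on the
# runtime side instead (containerd's sandbox_image setting).
```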
Apr 24 23:56:44.073722 kubelet[3267]: I0424 23:56:44.073280 3267 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:56:44.079071 kubelet[3267]: I0424 23:56:44.079039 3267 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:56:44.079071 kubelet[3267]: I0424 23:56:44.079064 3267 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:56:44.079324 kubelet[3267]: I0424 23:56:44.079300 3267 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:56:44.080550 kubelet[3267]: I0424 23:56:44.080526 3267 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:56:44.083053 kubelet[3267]: I0424 23:56:44.083024 3267 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:56:44.089678 kubelet[3267]: E0424 23:56:44.089636 3267 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:56:44.089678 kubelet[3267]: I0424 23:56:44.089671 3267 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:56:44.093254 kubelet[3267]: I0424 23:56:44.093225 3267 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:56:44.093497 kubelet[3267]: I0424 23:56:44.093453 3267 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:56:44.093653 kubelet[3267]: I0424 23:56:44.093494 3267 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-b07cc1dc35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:56:44.093785 kubelet[3267]: I0424 23:56:44.093662 3267 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:56:44.093785 kubelet[3267]: I0424 23:56:44.093677 3267 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:56:44.093785 kubelet[3267]: I0424 23:56:44.093733 3267 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:56:44.093921 kubelet[3267]: I0424 23:56:44.093910 3267 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:56:44.094602 kubelet[3267]: I0424 23:56:44.093929 3267 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:56:44.094602 kubelet[3267]: I0424 23:56:44.093962 3267 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:56:44.094602 kubelet[3267]: I0424 23:56:44.093980 3267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:56:44.095431 kubelet[3267]: I0424 23:56:44.095415 3267 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:56:44.096115 kubelet[3267]: I0424 23:56:44.096096 3267 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:56:44.100327 kubelet[3267]: I0424 23:56:44.100313 3267 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:56:44.100466 kubelet[3267]: I0424 23:56:44.100455 3267 server.go:1289] "Started kubelet" Apr 24 23:56:44.102511 kubelet[3267]: I0424 23:56:44.102498 3267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:56:44.116436 kubelet[3267]: I0424 23:56:44.116403 3267 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:56:44.118317 kubelet[3267]: I0424 23:56:44.118301 3267 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:56:44.123382 kubelet[3267]: I0424 23:56:44.122492 3267 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:56:44.123382 kubelet[3267]: I0424 23:56:44.122730 3267 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:56:44.123382 kubelet[3267]: I0424 23:56:44.122976 3267 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:56:44.124948 kubelet[3267]: I0424 23:56:44.124929 3267 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:56:44.125245 kubelet[3267]: E0424 23:56:44.125229 3267 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-b07cc1dc35\" not found" Apr 24 23:56:44.127315 kubelet[3267]: I0424 23:56:44.127296 3267 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:56:44.127569 kubelet[3267]: I0424 23:56:44.127554 3267 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:56:44.134206 kubelet[3267]: I0424 23:56:44.134044 3267 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:56:44.135143 kubelet[3267]: I0424 23:56:44.135101 3267 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:56:44.135445 kubelet[3267]: I0424 23:56:44.134508 3267 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:56:44.137002 kubelet[3267]: I0424 23:56:44.136982 3267 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:56:44.137104 kubelet[3267]: I0424 23:56:44.137093 3267 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:56:44.137196 kubelet[3267]: I0424 23:56:44.137185 3267 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
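The kubelet records above use the klog header layout `Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg` (severity letter I/W/E/F, month+day, wall-clock time, PID, then source location). A parsing sketch for that header (function name and regex are illustrative, and cover only the fields seen in this log):

```python
import re

# klog header: severity letter, mmdd, wall time, PID, file:line]
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<pid>\d+) (?P<src>[\w./_-]+:\d+)\]"
)

def parse_klog(line):
    """Split a klog-formatted kubelet line into header fields, or None."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

# One of the records from this log:
hdr = parse_klog('E0424 23:56:44.125229    3267 kubelet_node_status.go:466] "Error..."')
```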
Apr 24 23:56:44.137262 kubelet[3267]: I0424 23:56:44.137254 3267 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:56:44.137409 kubelet[3267]: E0424 23:56:44.137391 3267 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:56:44.137493 kubelet[3267]: E0424 23:56:44.137254 3267 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:56:44.146873 kubelet[3267]: I0424 23:56:44.146808 3267 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:56:44.187570 kubelet[3267]: I0424 23:56:44.187529 3267 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:56:44.187570 kubelet[3267]: I0424 23:56:44.187577 3267 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:56:44.187774 kubelet[3267]: I0424 23:56:44.187600 3267 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:56:44.187774 kubelet[3267]: I0424 23:56:44.187768 3267 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:56:44.187867 kubelet[3267]: I0424 23:56:44.187782 3267 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:56:44.187867 kubelet[3267]: I0424 23:56:44.187810 3267 policy_none.go:49] "None policy: Start" Apr 24 23:56:44.187867 kubelet[3267]: I0424 23:56:44.187824 3267 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:56:44.187867 kubelet[3267]: I0424 23:56:44.187836 3267 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:56:44.188010 kubelet[3267]: I0424 23:56:44.187958 3267 state_mem.go:75] "Updated machine memory state" Apr 24 23:56:44.192436 kubelet[3267]: E0424 23:56:44.192034 3267 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:56:44.192436 
kubelet[3267]: I0424 23:56:44.192219 3267 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:56:44.192436 kubelet[3267]: I0424 23:56:44.192234 3267 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:56:44.192845 kubelet[3267]: I0424 23:56:44.192830 3267 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:56:44.195566 kubelet[3267]: E0424 23:56:44.194681 3267 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:56:44.238722 kubelet[3267]: I0424 23:56:44.238680 3267 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.239265 kubelet[3267]: I0424 23:56:44.238701 3267 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.239265 kubelet[3267]: I0424 23:56:44.238832 3267 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.247202 kubelet[3267]: I0424 23:56:44.247153 3267 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:44.251286 kubelet[3267]: I0424 23:56:44.251122 3267 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:44.251286 kubelet[3267]: I0424 23:56:44.251150 3267 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:44.295676 kubelet[3267]: I0424 23:56:44.295445 3267 kubelet_node_status.go:75] "Attempting to register 
node" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.308202 kubelet[3267]: I0424 23:56:44.308166 3267 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.308330 kubelet[3267]: I0424 23:56:44.308241 3267 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.328577 kubelet[3267]: I0424 23:56:44.328498 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429102 kubelet[3267]: I0424 23:56:44.428988 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429102 kubelet[3267]: I0424 23:56:44.429029 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429102 kubelet[3267]: I0424 23:56:44.429052 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: 
\"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429102 kubelet[3267]: I0424 23:56:44.429076 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87b36bf70ec14354481b8a06ea337473-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b07cc1dc35\" (UID: \"87b36bf70ec14354481b8a06ea337473\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429102 kubelet[3267]: I0424 23:56:44.429100 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/733287ecb988aba42dd804ea71e92796-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" (UID: \"733287ecb988aba42dd804ea71e92796\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429422 kubelet[3267]: I0424 23:56:44.429148 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6260f243ec0e4a9d2e2d4e3e538adbec-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-b07cc1dc35\" (UID: \"6260f243ec0e4a9d2e2d4e3e538adbec\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429422 kubelet[3267]: I0424 23:56:44.429172 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87b36bf70ec14354481b8a06ea337473-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-b07cc1dc35\" (UID: \"87b36bf70ec14354481b8a06ea337473\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:44.429422 kubelet[3267]: I0424 23:56:44.429211 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/87b36bf70ec14354481b8a06ea337473-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-b07cc1dc35\" (UID: \"87b36bf70ec14354481b8a06ea337473\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:45.095801 kubelet[3267]: I0424 23:56:45.095737 3267 apiserver.go:52] "Watching apiserver" Apr 24 23:56:45.128053 kubelet[3267]: I0424 23:56:45.128009 3267 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:56:45.167604 kubelet[3267]: I0424 23:56:45.167037 3267 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:45.169372 kubelet[3267]: I0424 23:56:45.169065 3267 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:45.188320 kubelet[3267]: I0424 23:56:45.188289 3267 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:45.188458 kubelet[3267]: E0424 23:56:45.188373 3267 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-b07cc1dc35\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:45.195297 kubelet[3267]: I0424 23:56:45.194768 3267 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:45.195297 kubelet[3267]: E0424 23:56:45.194820 3267 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-b07cc1dc35\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" Apr 24 23:56:45.220879 kubelet[3267]: I0424 23:56:45.220624 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-b07cc1dc35" podStartSLOduration=1.22060699 podStartE2EDuration="1.22060699s" podCreationTimestamp="2026-04-24 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:45.220335586 +0000 UTC m=+1.181763109" watchObservedRunningTime="2026-04-24 23:56:45.22060699 +0000 UTC m=+1.182034613" Apr 24 23:56:45.220879 kubelet[3267]: I0424 23:56:45.220742 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-b07cc1dc35" podStartSLOduration=1.220737792 podStartE2EDuration="1.220737792s" podCreationTimestamp="2026-04-24 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:45.195407448 +0000 UTC m=+1.156834971" watchObservedRunningTime="2026-04-24 23:56:45.220737792 +0000 UTC m=+1.182165315" Apr 24 23:56:45.241624 kubelet[3267]: I0424 23:56:45.241197 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-b07cc1dc35" podStartSLOduration=1.241183169 podStartE2EDuration="1.241183169s" podCreationTimestamp="2026-04-24 23:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:45.231066132 +0000 UTC m=+1.192493655" watchObservedRunningTime="2026-04-24 23:56:45.241183169 +0000 UTC m=+1.202610692" Apr 24 23:56:50.622649 kubelet[3267]: I0424 23:56:50.622604 3267 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:56:50.623184 containerd[1723]: time="2026-04-24T23:56:50.623052660Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 24 23:56:50.623547 kubelet[3267]: I0424 23:56:50.623263 3267 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:56:51.506480 systemd[1]: Created slice kubepods-besteffort-pod5d33530d_69e5_4b9a_82ba_1c6091df548a.slice - libcontainer container kubepods-besteffort-pod5d33530d_69e5_4b9a_82ba_1c6091df548a.slice. Apr 24 23:56:51.579537 kubelet[3267]: I0424 23:56:51.579126 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d994w\" (UniqueName: \"kubernetes.io/projected/5d33530d-69e5-4b9a-82ba-1c6091df548a-kube-api-access-d994w\") pod \"kube-proxy-5ctn6\" (UID: \"5d33530d-69e5-4b9a-82ba-1c6091df548a\") " pod="kube-system/kube-proxy-5ctn6" Apr 24 23:56:51.579537 kubelet[3267]: I0424 23:56:51.579209 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d33530d-69e5-4b9a-82ba-1c6091df548a-kube-proxy\") pod \"kube-proxy-5ctn6\" (UID: \"5d33530d-69e5-4b9a-82ba-1c6091df548a\") " pod="kube-system/kube-proxy-5ctn6" Apr 24 23:56:51.579537 kubelet[3267]: I0424 23:56:51.579236 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d33530d-69e5-4b9a-82ba-1c6091df548a-xtables-lock\") pod \"kube-proxy-5ctn6\" (UID: \"5d33530d-69e5-4b9a-82ba-1c6091df548a\") " pod="kube-system/kube-proxy-5ctn6" Apr 24 23:56:51.579537 kubelet[3267]: I0424 23:56:51.579259 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d33530d-69e5-4b9a-82ba-1c6091df548a-lib-modules\") pod \"kube-proxy-5ctn6\" (UID: \"5d33530d-69e5-4b9a-82ba-1c6091df548a\") " pod="kube-system/kube-proxy-5ctn6" Apr 24 23:56:51.802183 systemd[1]: Created slice 
kubepods-besteffort-pod4415c13f_695d_417e_aae5_3c41f5335364.slice - libcontainer container kubepods-besteffort-pod4415c13f_695d_417e_aae5_3c41f5335364.slice. Apr 24 23:56:51.817632 containerd[1723]: time="2026-04-24T23:56:51.817580204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ctn6,Uid:5d33530d-69e5-4b9a-82ba-1c6091df548a,Namespace:kube-system,Attempt:0,}" Apr 24 23:56:51.863424 containerd[1723]: time="2026-04-24T23:56:51.862917784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:51.863424 containerd[1723]: time="2026-04-24T23:56:51.862984184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:51.863424 containerd[1723]: time="2026-04-24T23:56:51.863005985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:51.863424 containerd[1723]: time="2026-04-24T23:56:51.863186487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:51.881659 kubelet[3267]: I0424 23:56:51.881619 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4415c13f-695d-417e-aae5-3c41f5335364-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-nngg4\" (UID: \"4415c13f-695d-417e-aae5-3c41f5335364\") " pod="tigera-operator/tigera-operator-6bf85f8dd-nngg4" Apr 24 23:56:51.881659 kubelet[3267]: I0424 23:56:51.881669 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntlf\" (UniqueName: \"kubernetes.io/projected/4415c13f-695d-417e-aae5-3c41f5335364-kube-api-access-tntlf\") pod \"tigera-operator-6bf85f8dd-nngg4\" (UID: \"4415c13f-695d-417e-aae5-3c41f5335364\") " pod="tigera-operator/tigera-operator-6bf85f8dd-nngg4" Apr 24 23:56:51.892525 systemd[1]: Started cri-containerd-1e2a0bb593bcb5565cc0a8574cd5d44d10d52061b6160fd7e3590a0fa72cebae.scope - libcontainer container 1e2a0bb593bcb5565cc0a8574cd5d44d10d52061b6160fd7e3590a0fa72cebae. 
Apr 24 23:56:51.914087 containerd[1723]: time="2026-04-24T23:56:51.914043725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ctn6,Uid:5d33530d-69e5-4b9a-82ba-1c6091df548a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e2a0bb593bcb5565cc0a8574cd5d44d10d52061b6160fd7e3590a0fa72cebae\"" Apr 24 23:56:51.922991 containerd[1723]: time="2026-04-24T23:56:51.922945819Z" level=info msg="CreateContainer within sandbox \"1e2a0bb593bcb5565cc0a8574cd5d44d10d52061b6160fd7e3590a0fa72cebae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:56:51.958546 containerd[1723]: time="2026-04-24T23:56:51.958497895Z" level=info msg="CreateContainer within sandbox \"1e2a0bb593bcb5565cc0a8574cd5d44d10d52061b6160fd7e3590a0fa72cebae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1e3a296dacfcc5afe2c39ef26cbc057530ad8521f09465675e7d0578971a7e1\"" Apr 24 23:56:51.959637 containerd[1723]: time="2026-04-24T23:56:51.959572407Z" level=info msg="StartContainer for \"d1e3a296dacfcc5afe2c39ef26cbc057530ad8521f09465675e7d0578971a7e1\"" Apr 24 23:56:51.988534 systemd[1]: Started cri-containerd-d1e3a296dacfcc5afe2c39ef26cbc057530ad8521f09465675e7d0578971a7e1.scope - libcontainer container d1e3a296dacfcc5afe2c39ef26cbc057530ad8521f09465675e7d0578971a7e1. Apr 24 23:56:52.020921 containerd[1723]: time="2026-04-24T23:56:52.020868156Z" level=info msg="StartContainer for \"d1e3a296dacfcc5afe2c39ef26cbc057530ad8521f09465675e7d0578971a7e1\" returns successfully" Apr 24 23:56:52.105490 containerd[1723]: time="2026-04-24T23:56:52.105443851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-nngg4,Uid:4415c13f-695d-417e-aae5-3c41f5335364,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:56:52.167517 containerd[1723]: time="2026-04-24T23:56:52.167220705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:52.168722 containerd[1723]: time="2026-04-24T23:56:52.168512418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:52.168722 containerd[1723]: time="2026-04-24T23:56:52.168562019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:52.169385 containerd[1723]: time="2026-04-24T23:56:52.168949523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:52.193532 systemd[1]: Started cri-containerd-52fcf861d73554db47755038008b1dd0eac36621d5cba7513d3927cf50555d4a.scope - libcontainer container 52fcf861d73554db47755038008b1dd0eac36621d5cba7513d3927cf50555d4a. Apr 24 23:56:52.197721 kubelet[3267]: I0424 23:56:52.197631 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5ctn6" podStartSLOduration=1.197608026 podStartE2EDuration="1.197608026s" podCreationTimestamp="2026-04-24 23:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:52.197126121 +0000 UTC m=+8.158553744" watchObservedRunningTime="2026-04-24 23:56:52.197608026 +0000 UTC m=+8.159035649" Apr 24 23:56:52.244513 containerd[1723]: time="2026-04-24T23:56:52.244463522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-nngg4,Uid:4415c13f-695d-417e-aae5-3c41f5335364,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"52fcf861d73554db47755038008b1dd0eac36621d5cba7513d3927cf50555d4a\"" Apr 24 23:56:52.246777 containerd[1723]: time="2026-04-24T23:56:52.246046839Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:56:53.511956 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1739665465.mount: Deactivated successfully. Apr 24 23:56:55.089254 containerd[1723]: time="2026-04-24T23:56:55.089149349Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:55.091861 containerd[1723]: time="2026-04-24T23:56:55.091795286Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 24 23:56:55.095068 containerd[1723]: time="2026-04-24T23:56:55.095014932Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:55.099700 containerd[1723]: time="2026-04-24T23:56:55.099648098Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:55.100959 containerd[1723]: time="2026-04-24T23:56:55.100568211Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.854479172s" Apr 24 23:56:55.100959 containerd[1723]: time="2026-04-24T23:56:55.100606512Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 24 23:56:55.108459 containerd[1723]: time="2026-04-24T23:56:55.108425823Z" level=info msg="CreateContainer within sandbox \"52fcf861d73554db47755038008b1dd0eac36621d5cba7513d3927cf50555d4a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:56:55.141188 containerd[1723]: 
time="2026-04-24T23:56:55.141149089Z" level=info msg="CreateContainer within sandbox \"52fcf861d73554db47755038008b1dd0eac36621d5cba7513d3927cf50555d4a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3e4c58115632082c593abefdea723344b6f9b6f2d956f854ac15083e41370e73\"" Apr 24 23:56:55.142571 containerd[1723]: time="2026-04-24T23:56:55.141607596Z" level=info msg="StartContainer for \"3e4c58115632082c593abefdea723344b6f9b6f2d956f854ac15083e41370e73\"" Apr 24 23:56:55.174496 systemd[1]: Started cri-containerd-3e4c58115632082c593abefdea723344b6f9b6f2d956f854ac15083e41370e73.scope - libcontainer container 3e4c58115632082c593abefdea723344b6f9b6f2d956f854ac15083e41370e73. Apr 24 23:56:55.209583 containerd[1723]: time="2026-04-24T23:56:55.209539764Z" level=info msg="StartContainer for \"3e4c58115632082c593abefdea723344b6f9b6f2d956f854ac15083e41370e73\" returns successfully" Apr 24 23:56:56.217088 kubelet[3267]: I0424 23:56:56.216718 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-nngg4" podStartSLOduration=2.360948922 podStartE2EDuration="5.216701111s" podCreationTimestamp="2026-04-24 23:56:51 +0000 UTC" firstStartedPulling="2026-04-24 23:56:52.245713135 +0000 UTC m=+8.207140658" lastFinishedPulling="2026-04-24 23:56:55.101465224 +0000 UTC m=+11.062892847" observedRunningTime="2026-04-24 23:56:56.216459207 +0000 UTC m=+12.177886730" watchObservedRunningTime="2026-04-24 23:56:56.216701111 +0000 UTC m=+12.178128734" Apr 24 23:57:01.674800 sudo[2371]: pam_unix(sudo:session): session closed for user root Apr 24 23:57:01.692002 sshd[2368]: pam_unix(sshd:session): session closed for user core Apr 24 23:57:01.700003 systemd[1]: sshd@6-10.0.0.29:22-4.175.71.9:52210.service: Deactivated successfully. Apr 24 23:57:01.705238 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 24 23:57:01.706878 systemd[1]: session-9.scope: Consumed 5.434s CPU time, 156.0M memory peak, 0B memory swap peak. Apr 24 23:57:01.711731 systemd-logind[1712]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:57:01.715406 systemd-logind[1712]: Removed session 9. Apr 24 23:57:05.413923 systemd[1]: Created slice kubepods-besteffort-podb78ae5bc_15e5_4c9e_920f_eea413e440eb.slice - libcontainer container kubepods-besteffort-podb78ae5bc_15e5_4c9e_920f_eea413e440eb.slice. Apr 24 23:57:05.534827 systemd[1]: Created slice kubepods-besteffort-pod3e5d6db0_bf99_4b7e_b0fa_24237e878dd4.slice - libcontainer container kubepods-besteffort-pod3e5d6db0_bf99_4b7e_b0fa_24237e878dd4.slice. Apr 24 23:57:05.571069 kubelet[3267]: I0424 23:57:05.571023 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78ae5bc-15e5-4c9e-920f-eea413e440eb-tigera-ca-bundle\") pod \"calico-typha-67ffddf5b5-5m2dr\" (UID: \"b78ae5bc-15e5-4c9e-920f-eea413e440eb\") " pod="calico-system/calico-typha-67ffddf5b5-5m2dr" Apr 24 23:57:05.572648 kubelet[3267]: I0424 23:57:05.572548 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b78ae5bc-15e5-4c9e-920f-eea413e440eb-typha-certs\") pod \"calico-typha-67ffddf5b5-5m2dr\" (UID: \"b78ae5bc-15e5-4c9e-920f-eea413e440eb\") " pod="calico-system/calico-typha-67ffddf5b5-5m2dr" Apr 24 23:57:05.572648 kubelet[3267]: I0424 23:57:05.572586 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2nt\" (UniqueName: \"kubernetes.io/projected/b78ae5bc-15e5-4c9e-920f-eea413e440eb-kube-api-access-6x2nt\") pod \"calico-typha-67ffddf5b5-5m2dr\" (UID: \"b78ae5bc-15e5-4c9e-920f-eea413e440eb\") " pod="calico-system/calico-typha-67ffddf5b5-5m2dr" Apr 24 23:57:05.674923 kubelet[3267]: I0424 23:57:05.673794 
3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-xtables-lock\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.674923 kubelet[3267]: I0424 23:57:05.673918 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-cni-bin-dir\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.674923 kubelet[3267]: I0424 23:57:05.673940 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-cni-net-dir\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.674923 kubelet[3267]: I0424 23:57:05.673956 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-sys-fs\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.674923 kubelet[3267]: I0424 23:57:05.673987 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-var-run-calico\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675250 kubelet[3267]: I0424 23:57:05.674009 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-flexvol-driver-host\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675250 kubelet[3267]: I0424 23:57:05.674027 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-nodeproc\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675250 kubelet[3267]: I0424 23:57:05.674044 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-var-lib-calico\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675250 kubelet[3267]: I0424 23:57:05.674067 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbtr7\" (UniqueName: \"kubernetes.io/projected/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-kube-api-access-gbtr7\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675250 kubelet[3267]: I0424 23:57:05.674154 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-lib-modules\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675491 kubelet[3267]: I0424 23:57:05.674205 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-policysync\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675491 kubelet[3267]: I0424 23:57:05.674227 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-cni-log-dir\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675491 kubelet[3267]: I0424 23:57:05.674257 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-bpffs\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675491 kubelet[3267]: I0424 23:57:05.674279 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-node-certs\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.675491 kubelet[3267]: I0424 23:57:05.674306 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e5d6db0-bf99-4b7e-b0fa-24237e878dd4-tigera-ca-bundle\") pod \"calico-node-jtdxm\" (UID: \"3e5d6db0-bf99-4b7e-b0fa-24237e878dd4\") " pod="calico-system/calico-node-jtdxm" Apr 24 23:57:05.680243 kubelet[3267]: E0424 23:57:05.680201 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:05.725177 containerd[1723]: time="2026-04-24T23:57:05.725053217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67ffddf5b5-5m2dr,Uid:b78ae5bc-15e5-4c9e-920f-eea413e440eb,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:05.775519 kubelet[3267]: I0424 23:57:05.774878 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ba5ea344-e24c-488b-ad7b-af64eecfd3fe-registration-dir\") pod \"csi-node-driver-rwxs4\" (UID: \"ba5ea344-e24c-488b-ad7b-af64eecfd3fe\") " pod="calico-system/csi-node-driver-rwxs4" Apr 24 23:57:05.775519 kubelet[3267]: I0424 23:57:05.774956 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ba5ea344-e24c-488b-ad7b-af64eecfd3fe-varrun\") pod \"csi-node-driver-rwxs4\" (UID: \"ba5ea344-e24c-488b-ad7b-af64eecfd3fe\") " pod="calico-system/csi-node-driver-rwxs4" Apr 24 23:57:05.775519 kubelet[3267]: I0424 23:57:05.774985 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjqp4\" (UniqueName: \"kubernetes.io/projected/ba5ea344-e24c-488b-ad7b-af64eecfd3fe-kube-api-access-sjqp4\") pod \"csi-node-driver-rwxs4\" (UID: \"ba5ea344-e24c-488b-ad7b-af64eecfd3fe\") " pod="calico-system/csi-node-driver-rwxs4" Apr 24 23:57:05.775519 kubelet[3267]: I0424 23:57:05.775101 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba5ea344-e24c-488b-ad7b-af64eecfd3fe-kubelet-dir\") pod \"csi-node-driver-rwxs4\" (UID: \"ba5ea344-e24c-488b-ad7b-af64eecfd3fe\") " pod="calico-system/csi-node-driver-rwxs4" Apr 24 23:57:05.775519 kubelet[3267]: I0424 23:57:05.775128 3267 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ba5ea344-e24c-488b-ad7b-af64eecfd3fe-socket-dir\") pod \"csi-node-driver-rwxs4\" (UID: \"ba5ea344-e24c-488b-ad7b-af64eecfd3fe\") " pod="calico-system/csi-node-driver-rwxs4" Apr 24 23:57:05.779957 kubelet[3267]: E0424 23:57:05.779337 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.779957 kubelet[3267]: W0424 23:57:05.779385 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.779957 kubelet[3267]: E0424 23:57:05.779409 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.779957 kubelet[3267]: E0424 23:57:05.779692 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.779957 kubelet[3267]: W0424 23:57:05.779704 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.779957 kubelet[3267]: E0424 23:57:05.779718 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.781582 containerd[1723]: time="2026-04-24T23:57:05.781274707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:05.781582 containerd[1723]: time="2026-04-24T23:57:05.781365008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:05.781582 containerd[1723]: time="2026-04-24T23:57:05.781382408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:05.782586 containerd[1723]: time="2026-04-24T23:57:05.782311621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:05.787054 kubelet[3267]: E0424 23:57:05.787026 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.787238 kubelet[3267]: W0424 23:57:05.787145 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.787238 kubelet[3267]: E0424 23:57:05.787161 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.800713 kubelet[3267]: E0424 23:57:05.800564 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.800713 kubelet[3267]: W0424 23:57:05.800583 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.800713 kubelet[3267]: E0424 23:57:05.800617 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.817538 systemd[1]: Started cri-containerd-3d5cb38cb0c9c03f8c13e7d9ffe61bdf5eb82ce87b81096f0a150e2e32a77706.scope - libcontainer container 3d5cb38cb0c9c03f8c13e7d9ffe61bdf5eb82ce87b81096f0a150e2e32a77706. Apr 24 23:57:05.841563 containerd[1723]: time="2026-04-24T23:57:05.841519653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jtdxm,Uid:3e5d6db0-bf99-4b7e-b0fa-24237e878dd4,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:05.863238 containerd[1723]: time="2026-04-24T23:57:05.863185057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67ffddf5b5-5m2dr,Uid:b78ae5bc-15e5-4c9e-920f-eea413e440eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d5cb38cb0c9c03f8c13e7d9ffe61bdf5eb82ce87b81096f0a150e2e32a77706\"" Apr 24 23:57:05.864907 containerd[1723]: time="2026-04-24T23:57:05.864835180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:57:05.876680 kubelet[3267]: E0424 23:57:05.876458 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.876680 kubelet[3267]: W0424 23:57:05.876485 3267 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.876680 kubelet[3267]: E0424 23:57:05.876510 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.877650 kubelet[3267]: E0424 23:57:05.877138 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.877650 kubelet[3267]: W0424 23:57:05.877154 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.877650 kubelet[3267]: E0424 23:57:05.877171 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.877650 kubelet[3267]: E0424 23:57:05.877549 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.877650 kubelet[3267]: W0424 23:57:05.877563 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.877650 kubelet[3267]: E0424 23:57:05.877579 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.877995 kubelet[3267]: E0424 23:57:05.877833 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.877995 kubelet[3267]: W0424 23:57:05.877845 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.877995 kubelet[3267]: E0424 23:57:05.877857 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.878119 kubelet[3267]: E0424 23:57:05.878082 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.878119 kubelet[3267]: W0424 23:57:05.878096 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.878119 kubelet[3267]: E0424 23:57:05.878109 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.880199 kubelet[3267]: E0424 23:57:05.878404 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.880199 kubelet[3267]: W0424 23:57:05.878418 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.880199 kubelet[3267]: E0424 23:57:05.878433 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.880199 kubelet[3267]: E0424 23:57:05.878686 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.880199 kubelet[3267]: W0424 23:57:05.878697 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.880199 kubelet[3267]: E0424 23:57:05.878709 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.880199 kubelet[3267]: E0424 23:57:05.878992 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.880199 kubelet[3267]: W0424 23:57:05.879004 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.880199 kubelet[3267]: E0424 23:57:05.879016 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.880199 kubelet[3267]: E0424 23:57:05.879315 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.881814 kubelet[3267]: W0424 23:57:05.879326 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.881814 kubelet[3267]: E0424 23:57:05.879358 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.881814 kubelet[3267]: E0424 23:57:05.879600 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.881814 kubelet[3267]: W0424 23:57:05.879610 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.881814 kubelet[3267]: E0424 23:57:05.879622 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.881814 kubelet[3267]: E0424 23:57:05.879953 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.881814 kubelet[3267]: W0424 23:57:05.879964 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.881814 kubelet[3267]: E0424 23:57:05.879977 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.881814 kubelet[3267]: E0424 23:57:05.880423 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.881814 kubelet[3267]: W0424 23:57:05.880436 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.883523 kubelet[3267]: E0424 23:57:05.880450 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.883523 kubelet[3267]: E0424 23:57:05.881181 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.883523 kubelet[3267]: W0424 23:57:05.881195 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.883523 kubelet[3267]: E0424 23:57:05.881225 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.883523 kubelet[3267]: E0424 23:57:05.881947 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.883523 kubelet[3267]: W0424 23:57:05.881961 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.883523 kubelet[3267]: E0424 23:57:05.881975 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.883523 kubelet[3267]: E0424 23:57:05.882429 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.883523 kubelet[3267]: W0424 23:57:05.882441 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.883523 kubelet[3267]: E0424 23:57:05.882465 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.883966 kubelet[3267]: E0424 23:57:05.882794 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.883966 kubelet[3267]: W0424 23:57:05.882823 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.883966 kubelet[3267]: E0424 23:57:05.882838 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.883966 kubelet[3267]: E0424 23:57:05.883132 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.883966 kubelet[3267]: W0424 23:57:05.883145 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.883966 kubelet[3267]: E0424 23:57:05.883180 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.883966 kubelet[3267]: E0424 23:57:05.883556 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.883966 kubelet[3267]: W0424 23:57:05.883569 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.883966 kubelet[3267]: E0424 23:57:05.883591 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.883966 kubelet[3267]: E0424 23:57:05.883862 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.884547 kubelet[3267]: W0424 23:57:05.883881 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.884547 kubelet[3267]: E0424 23:57:05.883894 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.884547 kubelet[3267]: E0424 23:57:05.884207 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.884547 kubelet[3267]: W0424 23:57:05.884218 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.884547 kubelet[3267]: E0424 23:57:05.884246 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.884547 kubelet[3267]: E0424 23:57:05.884525 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.884547 kubelet[3267]: W0424 23:57:05.884536 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.884547 kubelet[3267]: E0424 23:57:05.884549 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.884881 kubelet[3267]: E0424 23:57:05.884843 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.884881 kubelet[3267]: W0424 23:57:05.884854 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.884881 kubelet[3267]: E0424 23:57:05.884867 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.885857 kubelet[3267]: E0424 23:57:05.885161 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.885857 kubelet[3267]: W0424 23:57:05.885175 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.885857 kubelet[3267]: E0424 23:57:05.885189 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:05.885857 kubelet[3267]: E0424 23:57:05.885640 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.885857 kubelet[3267]: W0424 23:57:05.885652 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.885857 kubelet[3267]: E0424 23:57:05.885665 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.887513 kubelet[3267]: E0424 23:57:05.887446 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.887513 kubelet[3267]: W0424 23:57:05.887460 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.887513 kubelet[3267]: E0424 23:57:05.887474 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.896658 containerd[1723]: time="2026-04-24T23:57:05.896165520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:05.896658 containerd[1723]: time="2026-04-24T23:57:05.896239021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:05.896658 containerd[1723]: time="2026-04-24T23:57:05.896256321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:05.896658 containerd[1723]: time="2026-04-24T23:57:05.896427324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:05.899399 kubelet[3267]: E0424 23:57:05.899267 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:05.899399 kubelet[3267]: W0424 23:57:05.899284 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:05.899399 kubelet[3267]: E0424 23:57:05.899306 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:05.919556 systemd[1]: Started cri-containerd-9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979.scope - libcontainer container 9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979. 
Apr 24 23:57:05.947288 containerd[1723]: time="2026-04-24T23:57:05.947132336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jtdxm,Uid:3e5d6db0-bf99-4b7e-b0fa-24237e878dd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\""
Apr 24 23:57:07.138726 kubelet[3267]: E0424 23:57:07.138651 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe"
Apr 24 23:57:09.138857 kubelet[3267]: E0424 23:57:09.137603 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe"
Apr 24 23:57:11.138720 kubelet[3267]: E0424 23:57:11.138648 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe"
Apr 24 23:57:13.138356 kubelet[3267]: E0424 23:57:13.138282 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe"
Apr 24 23:57:14.391027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995441731.mount: Deactivated successfully.
Apr 24 23:57:15.042870 containerd[1723]: time="2026-04-24T23:57:15.042815623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:15.045681 containerd[1723]: time="2026-04-24T23:57:15.045551158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 24 23:57:15.048818 containerd[1723]: time="2026-04-24T23:57:15.048753298Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:15.053424 containerd[1723]: time="2026-04-24T23:57:15.053361057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:15.054459 containerd[1723]: time="2026-04-24T23:57:15.054029665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 9.189148584s" Apr 24 23:57:15.054459 containerd[1723]: time="2026-04-24T23:57:15.054066366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 24 23:57:15.056937 containerd[1723]: time="2026-04-24T23:57:15.055826788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:57:15.073763 containerd[1723]: time="2026-04-24T23:57:15.073610713Z" level=info msg="CreateContainer within sandbox \"3d5cb38cb0c9c03f8c13e7d9ffe61bdf5eb82ce87b81096f0a150e2e32a77706\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:57:15.105284 containerd[1723]: time="2026-04-24T23:57:15.105230613Z" level=info msg="CreateContainer within sandbox \"3d5cb38cb0c9c03f8c13e7d9ffe61bdf5eb82ce87b81096f0a150e2e32a77706\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2332d199fbb91813080bb0ad387270027a89e6b804b45cebfe5f5d94c254bd39\"" Apr 24 23:57:15.105958 containerd[1723]: time="2026-04-24T23:57:15.105915222Z" level=info msg="StartContainer for \"2332d199fbb91813080bb0ad387270027a89e6b804b45cebfe5f5d94c254bd39\"" Apr 24 23:57:15.135521 systemd[1]: Started cri-containerd-2332d199fbb91813080bb0ad387270027a89e6b804b45cebfe5f5d94c254bd39.scope - libcontainer container 2332d199fbb91813080bb0ad387270027a89e6b804b45cebfe5f5d94c254bd39. Apr 24 23:57:15.138377 kubelet[3267]: E0424 23:57:15.137627 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:15.183530 containerd[1723]: time="2026-04-24T23:57:15.183061099Z" level=info msg="StartContainer for \"2332d199fbb91813080bb0ad387270027a89e6b804b45cebfe5f5d94c254bd39\" returns successfully" Apr 24 23:57:15.260516 kubelet[3267]: I0424 23:57:15.260439 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67ffddf5b5-5m2dr" podStartSLOduration=1.069563471 podStartE2EDuration="10.260416978s" podCreationTimestamp="2026-04-24 23:57:05 +0000 UTC" firstStartedPulling="2026-04-24 23:57:05.864461475 +0000 UTC m=+21.825889098" lastFinishedPulling="2026-04-24 23:57:15.055314982 +0000 UTC m=+31.016742605" observedRunningTime="2026-04-24 23:57:15.258536854 +0000 UTC m=+31.219964377" watchObservedRunningTime="2026-04-24 23:57:15.260416978 +0000 UTC 
m=+31.221844501" Apr 24 23:57:15.338679 kubelet[3267]: E0424 23:57:15.338445 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.338679 kubelet[3267]: W0424 23:57:15.338476 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.338679 kubelet[3267]: E0424 23:57:15.338501 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.338976 kubelet[3267]: E0424 23:57:15.338944 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.339300 kubelet[3267]: W0424 23:57:15.338958 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.339300 kubelet[3267]: E0424 23:57:15.339084 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.339776 kubelet[3267]: E0424 23:57:15.339425 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.339776 kubelet[3267]: W0424 23:57:15.339436 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.339776 kubelet[3267]: E0424 23:57:15.339452 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.339919 kubelet[3267]: E0424 23:57:15.339894 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.339919 kubelet[3267]: W0424 23:57:15.339906 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.340670 kubelet[3267]: E0424 23:57:15.340019 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.341531 kubelet[3267]: E0424 23:57:15.341431 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.341531 kubelet[3267]: W0424 23:57:15.341448 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.341531 kubelet[3267]: E0424 23:57:15.341461 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.343174 kubelet[3267]: E0424 23:57:15.343127 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.343174 kubelet[3267]: W0424 23:57:15.343144 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.343174 kubelet[3267]: E0424 23:57:15.343158 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.343534 kubelet[3267]: E0424 23:57:15.343458 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.343534 kubelet[3267]: W0424 23:57:15.343472 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.343534 kubelet[3267]: E0424 23:57:15.343486 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.344469 kubelet[3267]: E0424 23:57:15.344445 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.344469 kubelet[3267]: W0424 23:57:15.344466 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.344594 kubelet[3267]: E0424 23:57:15.344481 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.347464 kubelet[3267]: E0424 23:57:15.347429 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.347464 kubelet[3267]: W0424 23:57:15.347449 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.347464 kubelet[3267]: E0424 23:57:15.347464 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.347747 kubelet[3267]: E0424 23:57:15.347673 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.347747 kubelet[3267]: W0424 23:57:15.347684 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.347747 kubelet[3267]: E0424 23:57:15.347696 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.347920 kubelet[3267]: E0424 23:57:15.347902 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.347990 kubelet[3267]: W0424 23:57:15.347921 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.347990 kubelet[3267]: E0424 23:57:15.347934 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.348194 kubelet[3267]: E0424 23:57:15.348174 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.348252 kubelet[3267]: W0424 23:57:15.348195 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.348252 kubelet[3267]: E0424 23:57:15.348208 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.350449 kubelet[3267]: E0424 23:57:15.348497 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.350449 kubelet[3267]: W0424 23:57:15.348512 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.350449 kubelet[3267]: E0424 23:57:15.348524 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.350449 kubelet[3267]: E0424 23:57:15.348826 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.350449 kubelet[3267]: W0424 23:57:15.348839 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.350449 kubelet[3267]: E0424 23:57:15.348852 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.350449 kubelet[3267]: E0424 23:57:15.349687 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.350449 kubelet[3267]: W0424 23:57:15.349702 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.350449 kubelet[3267]: E0424 23:57:15.349715 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.360452 kubelet[3267]: E0424 23:57:15.360428 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.360452 kubelet[3267]: W0424 23:57:15.360449 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.360581 kubelet[3267]: E0424 23:57:15.360465 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.361507 kubelet[3267]: E0424 23:57:15.361483 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.361507 kubelet[3267]: W0424 23:57:15.361503 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.361632 kubelet[3267]: E0424 23:57:15.361519 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.363476 kubelet[3267]: E0424 23:57:15.363439 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.363476 kubelet[3267]: W0424 23:57:15.363474 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.363610 kubelet[3267]: E0424 23:57:15.363488 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.363926 kubelet[3267]: E0424 23:57:15.363794 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.363926 kubelet[3267]: W0424 23:57:15.363807 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.363926 kubelet[3267]: E0424 23:57:15.363820 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.365008 kubelet[3267]: E0424 23:57:15.364993 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.365161 kubelet[3267]: W0424 23:57:15.365083 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.365161 kubelet[3267]: E0424 23:57:15.365099 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.365588 kubelet[3267]: E0424 23:57:15.365466 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.365588 kubelet[3267]: W0424 23:57:15.365480 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.365588 kubelet[3267]: E0424 23:57:15.365494 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.366024 kubelet[3267]: E0424 23:57:15.365900 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.366024 kubelet[3267]: W0424 23:57:15.365913 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.366024 kubelet[3267]: E0424 23:57:15.365928 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.366495 kubelet[3267]: E0424 23:57:15.366357 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.366495 kubelet[3267]: W0424 23:57:15.366374 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.366495 kubelet[3267]: E0424 23:57:15.366387 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.366840 kubelet[3267]: E0424 23:57:15.366579 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.366840 kubelet[3267]: W0424 23:57:15.366589 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.366840 kubelet[3267]: E0424 23:57:15.366600 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.367659 kubelet[3267]: E0424 23:57:15.367117 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.367659 kubelet[3267]: W0424 23:57:15.367130 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.367659 kubelet[3267]: E0424 23:57:15.367142 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.368530 kubelet[3267]: E0424 23:57:15.368515 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.368628 kubelet[3267]: W0424 23:57:15.368615 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.368712 kubelet[3267]: E0424 23:57:15.368696 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.370634 kubelet[3267]: E0424 23:57:15.370620 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.370750 kubelet[3267]: W0424 23:57:15.370738 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.370870 kubelet[3267]: E0424 23:57:15.370810 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.371108 kubelet[3267]: E0424 23:57:15.371098 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.371260 kubelet[3267]: W0424 23:57:15.371170 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.371260 kubelet[3267]: E0424 23:57:15.371184 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.371572 kubelet[3267]: E0424 23:57:15.371533 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.371572 kubelet[3267]: W0424 23:57:15.371548 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.371572 kubelet[3267]: E0424 23:57:15.371559 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.372369 kubelet[3267]: E0424 23:57:15.372159 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.372369 kubelet[3267]: W0424 23:57:15.372172 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.372369 kubelet[3267]: E0424 23:57:15.372185 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.373647 kubelet[3267]: E0424 23:57:15.373537 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.373647 kubelet[3267]: W0424 23:57:15.373553 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.373647 kubelet[3267]: E0424 23:57:15.373567 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.375865 kubelet[3267]: E0424 23:57:15.375552 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.375865 kubelet[3267]: W0424 23:57:15.375567 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.375865 kubelet[3267]: E0424 23:57:15.375581 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.376434 kubelet[3267]: E0424 23:57:15.376419 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.376530 kubelet[3267]: W0424 23:57:15.376519 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.377121 kubelet[3267]: E0424 23:57:15.377100 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.242260 kubelet[3267]: I0424 23:57:16.242215 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:57:16.254856 kubelet[3267]: E0424 23:57:16.254543 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.254856 kubelet[3267]: W0424 23:57:16.254566 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.254856 kubelet[3267]: E0424 23:57:16.254600 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.254856 kubelet[3267]: E0424 23:57:16.254849 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.254856 kubelet[3267]: W0424 23:57:16.254862 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.255189 kubelet[3267]: E0424 23:57:16.254875 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.255189 kubelet[3267]: E0424 23:57:16.255183 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.255273 kubelet[3267]: W0424 23:57:16.255195 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.255273 kubelet[3267]: E0424 23:57:16.255208 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.255591 kubelet[3267]: E0424 23:57:16.255574 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.255687 kubelet[3267]: W0424 23:57:16.255633 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.255687 kubelet[3267]: E0424 23:57:16.255654 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.256072 kubelet[3267]: E0424 23:57:16.256053 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.256072 kubelet[3267]: W0424 23:57:16.256069 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.256209 kubelet[3267]: E0424 23:57:16.256084 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.256378 kubelet[3267]: E0424 23:57:16.256299 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.256378 kubelet[3267]: W0424 23:57:16.256311 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.256378 kubelet[3267]: E0424 23:57:16.256324 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.256610 kubelet[3267]: E0424 23:57:16.256589 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.256610 kubelet[3267]: W0424 23:57:16.256606 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.256733 kubelet[3267]: E0424 23:57:16.256619 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.256916 kubelet[3267]: E0424 23:57:16.256882 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.256916 kubelet[3267]: W0424 23:57:16.256906 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.257210 kubelet[3267]: E0424 23:57:16.256919 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.257355 kubelet[3267]: E0424 23:57:16.257326 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.257411 kubelet[3267]: W0424 23:57:16.257365 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.257411 kubelet[3267]: E0424 23:57:16.257380 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.257728 kubelet[3267]: E0424 23:57:16.257712 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.257728 kubelet[3267]: W0424 23:57:16.257727 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.257828 kubelet[3267]: E0424 23:57:16.257741 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.258065 kubelet[3267]: E0424 23:57:16.258024 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.258065 kubelet[3267]: W0424 23:57:16.258057 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.258179 kubelet[3267]: E0424 23:57:16.258071 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.258523 kubelet[3267]: E0424 23:57:16.258504 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.258523 kubelet[3267]: W0424 23:57:16.258520 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.258726 kubelet[3267]: E0424 23:57:16.258534 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.258871 kubelet[3267]: E0424 23:57:16.258855 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.258871 kubelet[3267]: W0424 23:57:16.258869 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.259116 kubelet[3267]: E0424 23:57:16.258883 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.259250 kubelet[3267]: E0424 23:57:16.259201 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.259250 kubelet[3267]: W0424 23:57:16.259215 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.259250 kubelet[3267]: E0424 23:57:16.259228 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.259728 kubelet[3267]: E0424 23:57:16.259641 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.259728 kubelet[3267]: W0424 23:57:16.259657 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.259728 kubelet[3267]: E0424 23:57:16.259667 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.270251 kubelet[3267]: E0424 23:57:16.270199 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.270251 kubelet[3267]: W0424 23:57:16.270214 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.270251 kubelet[3267]: E0424 23:57:16.270229 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.271046 kubelet[3267]: E0424 23:57:16.270942 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.271189 kubelet[3267]: W0424 23:57:16.270955 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.271189 kubelet[3267]: E0424 23:57:16.271140 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.271456 kubelet[3267]: E0424 23:57:16.271434 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.271456 kubelet[3267]: W0424 23:57:16.271448 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.271574 kubelet[3267]: E0424 23:57:16.271462 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.271723 kubelet[3267]: E0424 23:57:16.271715 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.271800 kubelet[3267]: W0424 23:57:16.271727 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.271800 kubelet[3267]: E0424 23:57:16.271745 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.272261 kubelet[3267]: E0424 23:57:16.272194 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.272261 kubelet[3267]: W0424 23:57:16.272208 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.272261 kubelet[3267]: E0424 23:57:16.272221 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.272833 kubelet[3267]: E0424 23:57:16.272767 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.272833 kubelet[3267]: W0424 23:57:16.272780 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.272833 kubelet[3267]: E0424 23:57:16.272793 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.273259 kubelet[3267]: E0424 23:57:16.273248 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.273403 kubelet[3267]: W0424 23:57:16.273312 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.273403 kubelet[3267]: E0424 23:57:16.273326 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.273658 kubelet[3267]: E0424 23:57:16.273636 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.273658 kubelet[3267]: W0424 23:57:16.273656 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.273850 kubelet[3267]: E0424 23:57:16.273670 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.276391 kubelet[3267]: E0424 23:57:16.275006 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.276391 kubelet[3267]: W0424 23:57:16.275020 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.276391 kubelet[3267]: E0424 23:57:16.275033 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.276391 kubelet[3267]: E0424 23:57:16.275326 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.276391 kubelet[3267]: W0424 23:57:16.275339 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.276391 kubelet[3267]: E0424 23:57:16.275378 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.276391 kubelet[3267]: E0424 23:57:16.275624 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.276391 kubelet[3267]: W0424 23:57:16.275635 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.276391 kubelet[3267]: E0424 23:57:16.275647 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.276391 kubelet[3267]: E0424 23:57:16.275896 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.276828 kubelet[3267]: W0424 23:57:16.275907 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.276828 kubelet[3267]: E0424 23:57:16.275920 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.276828 kubelet[3267]: E0424 23:57:16.276173 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.276828 kubelet[3267]: W0424 23:57:16.276188 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.276828 kubelet[3267]: E0424 23:57:16.276202 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.276828 kubelet[3267]: E0424 23:57:16.276492 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.276828 kubelet[3267]: W0424 23:57:16.276503 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.276828 kubelet[3267]: E0424 23:57:16.276516 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.277179 kubelet[3267]: E0424 23:57:16.276963 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.277179 kubelet[3267]: W0424 23:57:16.276976 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.277179 kubelet[3267]: E0424 23:57:16.276990 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.277301 kubelet[3267]: E0424 23:57:16.277243 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.277301 kubelet[3267]: W0424 23:57:16.277253 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.277301 kubelet[3267]: E0424 23:57:16.277265 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.278856 kubelet[3267]: E0424 23:57:16.278044 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.278856 kubelet[3267]: W0424 23:57:16.278061 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.278856 kubelet[3267]: E0424 23:57:16.278074 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.278856 kubelet[3267]: E0424 23:57:16.278312 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.278856 kubelet[3267]: W0424 23:57:16.278321 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.278856 kubelet[3267]: E0424 23:57:16.278332 3267 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.399550 containerd[1723]: time="2026-04-24T23:57:16.399492596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:16.401904 containerd[1723]: time="2026-04-24T23:57:16.401838826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 24 23:57:16.404629 containerd[1723]: time="2026-04-24T23:57:16.404587560Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:16.408776 containerd[1723]: time="2026-04-24T23:57:16.408720313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:16.410027 containerd[1723]: time="2026-04-24T23:57:16.409394121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.353510032s" Apr 24 23:57:16.410027 containerd[1723]: time="2026-04-24T23:57:16.409436922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 24 23:57:16.416189 containerd[1723]: time="2026-04-24T23:57:16.416154607Z" level=info msg="CreateContainer within sandbox \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:57:16.452845 containerd[1723]: time="2026-04-24T23:57:16.452799071Z" level=info msg="CreateContainer within sandbox \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28\"" Apr 24 23:57:16.453646 containerd[1723]: time="2026-04-24T23:57:16.453597381Z" level=info msg="StartContainer for \"f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28\"" Apr 24 23:57:16.489491 systemd[1]: Started cri-containerd-f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28.scope - libcontainer container f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28. Apr 24 23:57:16.519479 containerd[1723]: time="2026-04-24T23:57:16.519369113Z" level=info msg="StartContainer for \"f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28\" returns successfully" Apr 24 23:57:16.531427 systemd[1]: cri-containerd-f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28.scope: Deactivated successfully. Apr 24 23:57:16.555673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28-rootfs.mount: Deactivated successfully. 
Apr 24 23:57:17.625539 kubelet[3267]: E0424 23:57:17.137875 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:18.433993 containerd[1723]: time="2026-04-24T23:57:18.433881347Z" level=info msg="shim disconnected" id=f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28 namespace=k8s.io Apr 24 23:57:18.433993 containerd[1723]: time="2026-04-24T23:57:18.433989448Z" level=warning msg="cleaning up after shim disconnected" id=f03045d610a2322ffce0efb9e9d0081fd9560261d6d48b4fdb37da76b1b64b28 namespace=k8s.io Apr 24 23:57:18.433993 containerd[1723]: time="2026-04-24T23:57:18.434002949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:57:18.791335 kubelet[3267]: I0424 23:57:18.790730 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:57:19.138096 kubelet[3267]: E0424 23:57:19.138014 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:19.254389 containerd[1723]: time="2026-04-24T23:57:19.253434021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:57:21.138102 kubelet[3267]: E0424 23:57:21.138056 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 
23:57:23.138632 kubelet[3267]: E0424 23:57:23.138578 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:25.138500 kubelet[3267]: E0424 23:57:25.138006 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:27.139066 kubelet[3267]: E0424 23:57:27.139006 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:27.988151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853038411.mount: Deactivated successfully. 
Apr 24 23:57:28.041609 containerd[1723]: time="2026-04-24T23:57:28.041542364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:28.044110 containerd[1723]: time="2026-04-24T23:57:28.043909596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:57:28.047502 containerd[1723]: time="2026-04-24T23:57:28.047468545Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:28.051746 containerd[1723]: time="2026-04-24T23:57:28.051696903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:28.053025 containerd[1723]: time="2026-04-24T23:57:28.052422412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.798939091s" Apr 24 23:57:28.053025 containerd[1723]: time="2026-04-24T23:57:28.052464813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 23:57:28.060569 containerd[1723]: time="2026-04-24T23:57:28.060540323Z" level=info msg="CreateContainer within sandbox \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:57:28.095086 containerd[1723]: time="2026-04-24T23:57:28.095041093Z" level=info 
msg="CreateContainer within sandbox \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce\"" Apr 24 23:57:28.095834 containerd[1723]: time="2026-04-24T23:57:28.095792403Z" level=info msg="StartContainer for \"6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce\"" Apr 24 23:57:28.131520 systemd[1]: Started cri-containerd-6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce.scope - libcontainer container 6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce. Apr 24 23:57:28.170012 containerd[1723]: time="2026-04-24T23:57:28.169967413Z" level=info msg="StartContainer for \"6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce\" returns successfully" Apr 24 23:57:28.213324 systemd[1]: cri-containerd-6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce.scope: Deactivated successfully. Apr 24 23:57:28.988521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce-rootfs.mount: Deactivated successfully. 
Apr 24 23:57:29.137927 kubelet[3267]: E0424 23:57:29.137850 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:31.137747 kubelet[3267]: E0424 23:57:31.137674 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:31.477312 containerd[1723]: time="2026-04-24T23:57:31.477124952Z" level=info msg="shim disconnected" id=6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce namespace=k8s.io Apr 24 23:57:31.477312 containerd[1723]: time="2026-04-24T23:57:31.477202153Z" level=warning msg="cleaning up after shim disconnected" id=6078fe72b935130e8b2aa759b1c0534568649d93b6ceb53a1655d02c4abc33ce namespace=k8s.io Apr 24 23:57:31.477312 containerd[1723]: time="2026-04-24T23:57:31.477213553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:57:32.289181 containerd[1723]: time="2026-04-24T23:57:32.289136698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:57:33.138431 kubelet[3267]: E0424 23:57:33.138372 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:35.139687 kubelet[3267]: E0424 23:57:35.138505 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:36.194863 containerd[1723]: time="2026-04-24T23:57:36.194805487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:36.197550 containerd[1723]: time="2026-04-24T23:57:36.197365122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:57:36.200435 containerd[1723]: time="2026-04-24T23:57:36.200369264Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:36.204666 containerd[1723]: time="2026-04-24T23:57:36.204554722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:36.205740 containerd[1723]: time="2026-04-24T23:57:36.205583636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.916397338s" Apr 24 23:57:36.205740 containerd[1723]: time="2026-04-24T23:57:36.205616636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:57:36.213750 containerd[1723]: time="2026-04-24T23:57:36.213710348Z" level=info msg="CreateContainer within sandbox 
\"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:57:36.253727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214592203.mount: Deactivated successfully. Apr 24 23:57:36.264844 containerd[1723]: time="2026-04-24T23:57:36.264798356Z" level=info msg="CreateContainer within sandbox \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3\"" Apr 24 23:57:36.265567 containerd[1723]: time="2026-04-24T23:57:36.265531266Z" level=info msg="StartContainer for \"937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3\"" Apr 24 23:57:36.299989 systemd[1]: run-containerd-runc-k8s.io-937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3-runc.NPK8sc.mount: Deactivated successfully. Apr 24 23:57:36.311541 systemd[1]: Started cri-containerd-937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3.scope - libcontainer container 937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3. 
Apr 24 23:57:36.341703 containerd[1723]: time="2026-04-24T23:57:36.341658320Z" level=info msg="StartContainer for \"937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3\" returns successfully" Apr 24 23:57:37.137679 kubelet[3267]: E0424 23:57:37.137629 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:37.991480 containerd[1723]: time="2026-04-24T23:57:37.991424164Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:57:37.993610 systemd[1]: cri-containerd-937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3.scope: Deactivated successfully. Apr 24 23:57:38.019476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3-rootfs.mount: Deactivated successfully. 
Apr 24 23:57:38.072479 kubelet[3267]: I0424 23:57:38.072090 3267 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:57:39.283743 kubelet[3267]: I0424 23:57:39.283639 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d55b\" (UniqueName: \"kubernetes.io/projected/d89c2d22-a648-4465-85aa-6b284aea19c0-kube-api-access-5d55b\") pod \"coredns-674b8bbfcf-4cgp8\" (UID: \"d89c2d22-a648-4465-85aa-6b284aea19c0\") " pod="kube-system/coredns-674b8bbfcf-4cgp8" Apr 24 23:57:39.287066 kubelet[3267]: I0424 23:57:39.284116 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d89c2d22-a648-4465-85aa-6b284aea19c0-config-volume\") pod \"coredns-674b8bbfcf-4cgp8\" (UID: \"d89c2d22-a648-4465-85aa-6b284aea19c0\") " pod="kube-system/coredns-674b8bbfcf-4cgp8" Apr 24 23:57:39.287145 containerd[1723]: time="2026-04-24T23:57:39.283804207Z" level=info msg="shim disconnected" id=937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3 namespace=k8s.io Apr 24 23:57:39.287145 containerd[1723]: time="2026-04-24T23:57:39.283859007Z" level=warning msg="cleaning up after shim disconnected" id=937f6c84d2a6c693d47ce968beb252ddf6c4ddfa1074c253bf0ee12d8b48d3d3 namespace=k8s.io Apr 24 23:57:39.287145 containerd[1723]: time="2026-04-24T23:57:39.283869607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:57:39.298515 systemd[1]: Created slice kubepods-burstable-podd89c2d22_a648_4465_85aa_6b284aea19c0.slice - libcontainer container kubepods-burstable-podd89c2d22_a648_4465_85aa_6b284aea19c0.slice. Apr 24 23:57:39.317135 systemd[1]: Created slice kubepods-burstable-pod67de6f2b_7589_40b6_8033_934e9c5ab432.slice - libcontainer container kubepods-burstable-pod67de6f2b_7589_40b6_8033_934e9c5ab432.slice. 
Apr 24 23:57:39.329274 containerd[1723]: time="2026-04-24T23:57:39.329222023Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:57:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:57:39.331619 systemd[1]: Created slice kubepods-besteffort-podba5ea344_e24c_488b_ad7b_af64eecfd3fe.slice - libcontainer container kubepods-besteffort-podba5ea344_e24c_488b_ad7b_af64eecfd3fe.slice. Apr 24 23:57:39.340728 containerd[1723]: time="2026-04-24T23:57:39.340687279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwxs4,Uid:ba5ea344-e24c-488b-ad7b-af64eecfd3fe,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:39.341869 systemd[1]: Created slice kubepods-besteffort-pod556622c8_9156_4147_b4ef_3b90cb6f4249.slice - libcontainer container kubepods-besteffort-pod556622c8_9156_4147_b4ef_3b90cb6f4249.slice. Apr 24 23:57:39.352325 systemd[1]: Created slice kubepods-besteffort-podba1252cc_9f02_40b0_83cf_8bd40241ac3c.slice - libcontainer container kubepods-besteffort-podba1252cc_9f02_40b0_83cf_8bd40241ac3c.slice. Apr 24 23:57:39.358737 systemd[1]: Created slice kubepods-besteffort-pod4a31630e_98ec_43f7_b187_040b947d7c6b.slice - libcontainer container kubepods-besteffort-pod4a31630e_98ec_43f7_b187_040b947d7c6b.slice. 
Apr 24 23:57:39.384992 kubelet[3267]: I0424 23:57:39.384931 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxs44\" (UniqueName: \"kubernetes.io/projected/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-kube-api-access-gxs44\") pod \"whisker-79bcdf8478-559x4\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " pod="calico-system/whisker-79bcdf8478-559x4" Apr 24 23:57:39.384992 kubelet[3267]: I0424 23:57:39.384986 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/105586a9-fa42-4f9c-8b39-19852c899d53-config\") pod \"goldmane-5b85766d88-frn4w\" (UID: \"105586a9-fa42-4f9c-8b39-19852c899d53\") " pod="calico-system/goldmane-5b85766d88-frn4w" Apr 24 23:57:39.385181 kubelet[3267]: I0424 23:57:39.385008 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/105586a9-fa42-4f9c-8b39-19852c899d53-goldmane-key-pair\") pod \"goldmane-5b85766d88-frn4w\" (UID: \"105586a9-fa42-4f9c-8b39-19852c899d53\") " pod="calico-system/goldmane-5b85766d88-frn4w" Apr 24 23:57:39.385181 kubelet[3267]: I0424 23:57:39.385067 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-ca-bundle\") pod \"whisker-79bcdf8478-559x4\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " pod="calico-system/whisker-79bcdf8478-559x4" Apr 24 23:57:39.385181 kubelet[3267]: I0424 23:57:39.385095 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/105586a9-fa42-4f9c-8b39-19852c899d53-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-frn4w\" (UID: \"105586a9-fa42-4f9c-8b39-19852c899d53\") " 
pod="calico-system/goldmane-5b85766d88-frn4w" Apr 24 23:57:39.385181 kubelet[3267]: I0424 23:57:39.385119 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/556622c8-9156-4147-b4ef-3b90cb6f4249-tigera-ca-bundle\") pod \"calico-kube-controllers-7fd8994f4c-c6q9c\" (UID: \"556622c8-9156-4147-b4ef-3b90cb6f4249\") " pod="calico-system/calico-kube-controllers-7fd8994f4c-c6q9c" Apr 24 23:57:39.385181 kubelet[3267]: I0424 23:57:39.385152 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msc5w\" (UniqueName: \"kubernetes.io/projected/4f4f44d0-9a79-44e1-a0dd-24732d17ab45-kube-api-access-msc5w\") pod \"calico-apiserver-7876b86597-j8278\" (UID: \"4f4f44d0-9a79-44e1-a0dd-24732d17ab45\") " pod="calico-system/calico-apiserver-7876b86597-j8278" Apr 24 23:57:39.385421 kubelet[3267]: I0424 23:57:39.385194 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4spj\" (UniqueName: \"kubernetes.io/projected/67de6f2b-7589-40b6-8033-934e9c5ab432-kube-api-access-c4spj\") pod \"coredns-674b8bbfcf-4hsvs\" (UID: \"67de6f2b-7589-40b6-8033-934e9c5ab432\") " pod="kube-system/coredns-674b8bbfcf-4hsvs" Apr 24 23:57:39.385421 kubelet[3267]: I0424 23:57:39.385218 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4xn\" (UniqueName: \"kubernetes.io/projected/105586a9-fa42-4f9c-8b39-19852c899d53-kube-api-access-gm4xn\") pod \"goldmane-5b85766d88-frn4w\" (UID: \"105586a9-fa42-4f9c-8b39-19852c899d53\") " pod="calico-system/goldmane-5b85766d88-frn4w" Apr 24 23:57:39.385421 kubelet[3267]: I0424 23:57:39.385240 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgz8\" (UniqueName: 
\"kubernetes.io/projected/556622c8-9156-4147-b4ef-3b90cb6f4249-kube-api-access-cbgz8\") pod \"calico-kube-controllers-7fd8994f4c-c6q9c\" (UID: \"556622c8-9156-4147-b4ef-3b90cb6f4249\") " pod="calico-system/calico-kube-controllers-7fd8994f4c-c6q9c" Apr 24 23:57:39.385421 kubelet[3267]: I0424 23:57:39.385282 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4a31630e-98ec-43f7-b187-040b947d7c6b-calico-apiserver-certs\") pod \"calico-apiserver-7876b86597-fghhg\" (UID: \"4a31630e-98ec-43f7-b187-040b947d7c6b\") " pod="calico-system/calico-apiserver-7876b86597-fghhg" Apr 24 23:57:39.385421 kubelet[3267]: I0424 23:57:39.385310 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4dr\" (UniqueName: \"kubernetes.io/projected/4a31630e-98ec-43f7-b187-040b947d7c6b-kube-api-access-zz4dr\") pod \"calico-apiserver-7876b86597-fghhg\" (UID: \"4a31630e-98ec-43f7-b187-040b947d7c6b\") " pod="calico-system/calico-apiserver-7876b86597-fghhg" Apr 24 23:57:39.386959 kubelet[3267]: I0424 23:57:39.385335 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67de6f2b-7589-40b6-8033-934e9c5ab432-config-volume\") pod \"coredns-674b8bbfcf-4hsvs\" (UID: \"67de6f2b-7589-40b6-8033-934e9c5ab432\") " pod="kube-system/coredns-674b8bbfcf-4hsvs" Apr 24 23:57:39.387027 kubelet[3267]: I0424 23:57:39.386987 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-nginx-config\") pod \"whisker-79bcdf8478-559x4\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " pod="calico-system/whisker-79bcdf8478-559x4" Apr 24 23:57:39.387027 kubelet[3267]: I0424 23:57:39.387017 3267 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-backend-key-pair\") pod \"whisker-79bcdf8478-559x4\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " pod="calico-system/whisker-79bcdf8478-559x4" Apr 24 23:57:39.387111 kubelet[3267]: I0424 23:57:39.387058 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f4f44d0-9a79-44e1-a0dd-24732d17ab45-calico-apiserver-certs\") pod \"calico-apiserver-7876b86597-j8278\" (UID: \"4f4f44d0-9a79-44e1-a0dd-24732d17ab45\") " pod="calico-system/calico-apiserver-7876b86597-j8278" Apr 24 23:57:39.390118 systemd[1]: Created slice kubepods-besteffort-pod105586a9_fa42_4f9c_8b39_19852c899d53.slice - libcontainer container kubepods-besteffort-pod105586a9_fa42_4f9c_8b39_19852c899d53.slice. Apr 24 23:57:39.403263 systemd[1]: Created slice kubepods-besteffort-pod4f4f44d0_9a79_44e1_a0dd_24732d17ab45.slice - libcontainer container kubepods-besteffort-pod4f4f44d0_9a79_44e1_a0dd_24732d17ab45.slice. 
Apr 24 23:57:39.469589 containerd[1723]: time="2026-04-24T23:57:39.469533628Z" level=error msg="Failed to destroy network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.471927 containerd[1723]: time="2026-04-24T23:57:39.471749358Z" level=error msg="encountered an error cleaning up failed sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.471927 containerd[1723]: time="2026-04-24T23:57:39.471844759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwxs4,Uid:ba5ea344-e24c-488b-ad7b-af64eecfd3fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.472648 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42-shm.mount: Deactivated successfully. 
Apr 24 23:57:39.475225 kubelet[3267]: E0424 23:57:39.473271 3267 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.475225 kubelet[3267]: E0424 23:57:39.473369 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwxs4" Apr 24 23:57:39.475225 kubelet[3267]: E0424 23:57:39.473403 3267 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwxs4" Apr 24 23:57:39.475410 kubelet[3267]: E0424 23:57:39.473468 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rwxs4_calico-system(ba5ea344-e24c-488b-ad7b-af64eecfd3fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rwxs4_calico-system(ba5ea344-e24c-488b-ad7b-af64eecfd3fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:39.615825 containerd[1723]: time="2026-04-24T23:57:39.615621111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4cgp8,Uid:d89c2d22-a648-4465-85aa-6b284aea19c0,Namespace:kube-system,Attempt:0,}" Apr 24 23:57:39.627084 containerd[1723]: time="2026-04-24T23:57:39.627047566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4hsvs,Uid:67de6f2b-7589-40b6-8033-934e9c5ab432,Namespace:kube-system,Attempt:0,}" Apr 24 23:57:39.646988 containerd[1723]: time="2026-04-24T23:57:39.646948936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd8994f4c-c6q9c,Uid:556622c8-9156-4147-b4ef-3b90cb6f4249,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:39.657488 containerd[1723]: time="2026-04-24T23:57:39.657456978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bcdf8478-559x4,Uid:ba1252cc-9f02-40b0-83cf-8bd40241ac3c,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:39.671037 containerd[1723]: time="2026-04-24T23:57:39.670978462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-fghhg,Uid:4a31630e-98ec-43f7-b187-040b947d7c6b,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:39.702013 containerd[1723]: time="2026-04-24T23:57:39.701905682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-frn4w,Uid:105586a9-fa42-4f9c-8b39-19852c899d53,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:39.720988 containerd[1723]: time="2026-04-24T23:57:39.720839539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-j8278,Uid:4f4f44d0-9a79-44e1-a0dd-24732d17ab45,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:39.733512 containerd[1723]: 
time="2026-04-24T23:57:39.733443710Z" level=error msg="Failed to destroy network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.734528 containerd[1723]: time="2026-04-24T23:57:39.734477724Z" level=error msg="encountered an error cleaning up failed sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.734649 containerd[1723]: time="2026-04-24T23:57:39.734569725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4cgp8,Uid:d89c2d22-a648-4465-85aa-6b284aea19c0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.734966 kubelet[3267]: E0424 23:57:39.734912 3267 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.735066 kubelet[3267]: E0424 23:57:39.735000 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4cgp8" Apr 24 23:57:39.735066 kubelet[3267]: E0424 23:57:39.735032 3267 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4cgp8" Apr 24 23:57:39.735163 kubelet[3267]: E0424 23:57:39.735102 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4cgp8_kube-system(d89c2d22-a648-4465-85aa-6b284aea19c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4cgp8_kube-system(d89c2d22-a648-4465-85aa-6b284aea19c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4cgp8" podUID="d89c2d22-a648-4465-85aa-6b284aea19c0" Apr 24 23:57:39.783441 containerd[1723]: time="2026-04-24T23:57:39.783368188Z" level=error msg="Failed to destroy network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.785235 
containerd[1723]: time="2026-04-24T23:57:39.785174112Z" level=error msg="encountered an error cleaning up failed sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.785396 containerd[1723]: time="2026-04-24T23:57:39.785310814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4hsvs,Uid:67de6f2b-7589-40b6-8033-934e9c5ab432,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.787195 kubelet[3267]: E0424 23:57:39.787139 3267 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.787324 kubelet[3267]: E0424 23:57:39.787226 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4hsvs" Apr 24 23:57:39.787324 kubelet[3267]: E0424 23:57:39.787257 3267 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4hsvs" Apr 24 23:57:39.787809 kubelet[3267]: E0424 23:57:39.787329 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4hsvs_kube-system(67de6f2b-7589-40b6-8033-934e9c5ab432)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4hsvs_kube-system(67de6f2b-7589-40b6-8033-934e9c5ab432)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4hsvs" podUID="67de6f2b-7589-40b6-8033-934e9c5ab432" Apr 24 23:57:39.933499 containerd[1723]: time="2026-04-24T23:57:39.933324623Z" level=error msg="Failed to destroy network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.936634 containerd[1723]: time="2026-04-24T23:57:39.936587667Z" level=error msg="encountered an error cleaning up failed sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Apr 24 23:57:39.936846 containerd[1723]: time="2026-04-24T23:57:39.936808370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd8994f4c-c6q9c,Uid:556622c8-9156-4147-b4ef-3b90cb6f4249,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.937597 kubelet[3267]: E0424 23:57:39.937348 3267 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.938353 kubelet[3267]: E0424 23:57:39.938212 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd8994f4c-c6q9c" Apr 24 23:57:39.939434 kubelet[3267]: E0424 23:57:39.938465 3267 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7fd8994f4c-c6q9c" Apr 24 23:57:39.939434 kubelet[3267]: E0424 23:57:39.938582 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fd8994f4c-c6q9c_calico-system(556622c8-9156-4147-b4ef-3b90cb6f4249)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fd8994f4c-c6q9c_calico-system(556622c8-9156-4147-b4ef-3b90cb6f4249)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd8994f4c-c6q9c" podUID="556622c8-9156-4147-b4ef-3b90cb6f4249" Apr 24 23:57:39.957363 containerd[1723]: time="2026-04-24T23:57:39.955726327Z" level=error msg="Failed to destroy network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.957837 containerd[1723]: time="2026-04-24T23:57:39.957799655Z" level=error msg="encountered an error cleaning up failed sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.957996 containerd[1723]: time="2026-04-24T23:57:39.957967758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bcdf8478-559x4,Uid:ba1252cc-9f02-40b0-83cf-8bd40241ac3c,Namespace:calico-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.958369 kubelet[3267]: E0424 23:57:39.958310 3267 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.958569 kubelet[3267]: E0424 23:57:39.958530 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bcdf8478-559x4" Apr 24 23:57:39.958696 kubelet[3267]: E0424 23:57:39.958676 3267 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bcdf8478-559x4" Apr 24 23:57:39.958979 kubelet[3267]: E0424 23:57:39.958939 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79bcdf8478-559x4_calico-system(ba1252cc-9f02-40b0-83cf-8bd40241ac3c)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"whisker-79bcdf8478-559x4_calico-system(ba1252cc-9f02-40b0-83cf-8bd40241ac3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bcdf8478-559x4" podUID="ba1252cc-9f02-40b0-83cf-8bd40241ac3c" Apr 24 23:57:39.977561 containerd[1723]: time="2026-04-24T23:57:39.977504923Z" level=error msg="Failed to destroy network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.978102 containerd[1723]: time="2026-04-24T23:57:39.978065830Z" level=error msg="encountered an error cleaning up failed sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.978282 containerd[1723]: time="2026-04-24T23:57:39.978255533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-j8278,Uid:4f4f44d0-9a79-44e1-a0dd-24732d17ab45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.978679 kubelet[3267]: E0424 23:57:39.978640 3267 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.980052 kubelet[3267]: E0424 23:57:39.978825 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7876b86597-j8278" Apr 24 23:57:39.980052 kubelet[3267]: E0424 23:57:39.978875 3267 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7876b86597-j8278" Apr 24 23:57:39.980052 kubelet[3267]: E0424 23:57:39.978982 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7876b86597-j8278_calico-system(4f4f44d0-9a79-44e1-a0dd-24732d17ab45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7876b86597-j8278_calico-system(4f4f44d0-9a79-44e1-a0dd-24732d17ab45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7876b86597-j8278" podUID="4f4f44d0-9a79-44e1-a0dd-24732d17ab45" Apr 24 23:57:39.987729 containerd[1723]: time="2026-04-24T23:57:39.987672661Z" level=error msg="Failed to destroy network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.988279 containerd[1723]: time="2026-04-24T23:57:39.988231768Z" level=error msg="encountered an error cleaning up failed sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.988667 containerd[1723]: time="2026-04-24T23:57:39.988629274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-frn4w,Uid:105586a9-fa42-4f9c-8b39-19852c899d53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.990764 kubelet[3267]: E0424 23:57:39.989020 3267 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 24 23:57:39.990764 kubelet[3267]: E0424 23:57:39.989077 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-frn4w" Apr 24 23:57:39.990764 kubelet[3267]: E0424 23:57:39.989109 3267 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-frn4w" Apr 24 23:57:39.990991 kubelet[3267]: E0424 23:57:39.989169 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-frn4w_calico-system(105586a9-fa42-4f9c-8b39-19852c899d53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-frn4w_calico-system(105586a9-fa42-4f9c-8b39-19852c899d53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-frn4w" podUID="105586a9-fa42-4f9c-8b39-19852c899d53" Apr 24 23:57:39.992359 containerd[1723]: time="2026-04-24T23:57:39.992319424Z" level=error msg="Failed to destroy network for sandbox 
\"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.992826 containerd[1723]: time="2026-04-24T23:57:39.992784330Z" level=error msg="encountered an error cleaning up failed sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.993373 containerd[1723]: time="2026-04-24T23:57:39.993182436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-fghhg,Uid:4a31630e-98ec-43f7-b187-040b947d7c6b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.993787 kubelet[3267]: E0424 23:57:39.993763 3267 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.993863 kubelet[3267]: E0424 23:57:39.993812 3267 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7876b86597-fghhg" Apr 24 23:57:39.993863 kubelet[3267]: E0424 23:57:39.993851 3267 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7876b86597-fghhg" Apr 24 23:57:39.993960 kubelet[3267]: E0424 23:57:39.993930 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7876b86597-fghhg_calico-system(4a31630e-98ec-43f7-b187-040b947d7c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7876b86597-fghhg_calico-system(4a31630e-98ec-43f7-b187-040b947d7c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7876b86597-fghhg" podUID="4a31630e-98ec-43f7-b187-040b947d7c6b" Apr 24 23:57:40.327218 kubelet[3267]: I0424 23:57:40.327073 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:40.329962 containerd[1723]: time="2026-04-24T23:57:40.329874406Z" level=info msg="StopPodSandbox for \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\"" Apr 24 23:57:40.330874 containerd[1723]: 
time="2026-04-24T23:57:40.330572915Z" level=info msg="Ensure that sandbox ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51 in task-service has been cleanup successfully" Apr 24 23:57:40.331598 kubelet[3267]: I0424 23:57:40.331570 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:57:40.332235 containerd[1723]: time="2026-04-24T23:57:40.332096236Z" level=info msg="StopPodSandbox for \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\"" Apr 24 23:57:40.333573 containerd[1723]: time="2026-04-24T23:57:40.333545956Z" level=info msg="Ensure that sandbox 75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75 in task-service has been cleanup successfully" Apr 24 23:57:40.334775 kubelet[3267]: I0424 23:57:40.334577 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Apr 24 23:57:40.338273 containerd[1723]: time="2026-04-24T23:57:40.336069890Z" level=info msg="StopPodSandbox for \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\"" Apr 24 23:57:40.338273 containerd[1723]: time="2026-04-24T23:57:40.336299793Z" level=info msg="Ensure that sandbox ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3 in task-service has been cleanup successfully" Apr 24 23:57:40.349291 kubelet[3267]: I0424 23:57:40.349266 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:57:40.351754 containerd[1723]: time="2026-04-24T23:57:40.351719102Z" level=info msg="StopPodSandbox for \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\"" Apr 24 23:57:40.352031 containerd[1723]: time="2026-04-24T23:57:40.351923805Z" level=info msg="Ensure that sandbox 
169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de in task-service has been cleanup successfully" Apr 24 23:57:40.368900 kubelet[3267]: I0424 23:57:40.368130 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:57:40.370645 containerd[1723]: time="2026-04-24T23:57:40.370611959Z" level=info msg="StopPodSandbox for \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\"" Apr 24 23:57:40.372745 containerd[1723]: time="2026-04-24T23:57:40.370976764Z" level=info msg="Ensure that sandbox 31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85 in task-service has been cleanup successfully" Apr 24 23:57:40.376591 containerd[1723]: time="2026-04-24T23:57:40.376562139Z" level=info msg="CreateContainer within sandbox \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:57:40.379117 kubelet[3267]: I0424 23:57:40.378986 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:57:40.381702 containerd[1723]: time="2026-04-24T23:57:40.381675709Z" level=info msg="StopPodSandbox for \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\"" Apr 24 23:57:40.382129 containerd[1723]: time="2026-04-24T23:57:40.382071614Z" level=info msg="Ensure that sandbox a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324 in task-service has been cleanup successfully" Apr 24 23:57:40.386311 kubelet[3267]: I0424 23:57:40.385445 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:57:40.403534 containerd[1723]: time="2026-04-24T23:57:40.402631593Z" level=info msg="StopPodSandbox for 
\"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\"" Apr 24 23:57:40.403534 containerd[1723]: time="2026-04-24T23:57:40.402826196Z" level=info msg="Ensure that sandbox dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42 in task-service has been cleanup successfully" Apr 24 23:57:40.420891 kubelet[3267]: I0424 23:57:40.420862 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Apr 24 23:57:40.423191 containerd[1723]: time="2026-04-24T23:57:40.422830067Z" level=info msg="StopPodSandbox for \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\"" Apr 24 23:57:40.424878 containerd[1723]: time="2026-04-24T23:57:40.424849195Z" level=info msg="Ensure that sandbox 51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4 in task-service has been cleanup successfully" Apr 24 23:57:40.489056 containerd[1723]: time="2026-04-24T23:57:40.489000266Z" level=info msg="CreateContainer within sandbox \"9f3989fe3e47f63418d6520d44148b16e94710b21d708fdf9e4a9a890f673979\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8e1af0ebc58dd4d640ff6e004a50f78a354ed5b604cfc9971aa1ace2e3471d1e\"" Apr 24 23:57:40.491855 containerd[1723]: time="2026-04-24T23:57:40.491817604Z" level=info msg="StartContainer for \"8e1af0ebc58dd4d640ff6e004a50f78a354ed5b604cfc9971aa1ace2e3471d1e\"" Apr 24 23:57:40.520286 containerd[1723]: time="2026-04-24T23:57:40.520223189Z" level=error msg="StopPodSandbox for \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\" failed" error="failed to destroy network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.520718 containerd[1723]: 
time="2026-04-24T23:57:40.520683496Z" level=error msg="StopPodSandbox for \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\" failed" error="failed to destroy network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.521656 kubelet[3267]: E0424 23:57:40.521611 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:40.522470 kubelet[3267]: E0424 23:57:40.521611 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Apr 24 23:57:40.522853 kubelet[3267]: E0424 23:57:40.522594 3267 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"} Apr 24 23:57:40.522853 kubelet[3267]: E0424 23:57:40.522688 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d89c2d22-a648-4465-85aa-6b284aea19c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.522853 kubelet[3267]: E0424 23:57:40.522722 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d89c2d22-a648-4465-85aa-6b284aea19c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4cgp8" podUID="d89c2d22-a648-4465-85aa-6b284aea19c0" Apr 24 23:57:40.522853 kubelet[3267]: E0424 23:57:40.522776 3267 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51"} Apr 24 23:57:40.522853 kubelet[3267]: E0424 23:57:40.522809 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.523193 kubelet[3267]: E0424 23:57:40.522832 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bcdf8478-559x4" podUID="ba1252cc-9f02-40b0-83cf-8bd40241ac3c" Apr 24 23:57:40.563084 containerd[1723]: time="2026-04-24T23:57:40.563015470Z" level=error msg="StopPodSandbox for \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\" failed" error="failed to destroy network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.563685 kubelet[3267]: E0424 23:57:40.563580 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:57:40.563880 kubelet[3267]: E0424 23:57:40.563855 3267 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de"} Apr 24 23:57:40.564006 kubelet[3267]: E0424 23:57:40.563987 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f4f44d0-9a79-44e1-a0dd-24732d17ab45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.564201 kubelet[3267]: E0424 23:57:40.564168 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f4f44d0-9a79-44e1-a0dd-24732d17ab45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7876b86597-j8278" podUID="4f4f44d0-9a79-44e1-a0dd-24732d17ab45" Apr 24 23:57:40.576168 containerd[1723]: time="2026-04-24T23:57:40.576111148Z" level=error msg="StopPodSandbox for \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\" failed" error="failed to destroy network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.576567 kubelet[3267]: E0424 23:57:40.576528 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:57:40.576650 kubelet[3267]: E0424 23:57:40.576583 3267 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75"} Apr 24 23:57:40.576650 kubelet[3267]: E0424 23:57:40.576624 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"556622c8-9156-4147-b4ef-3b90cb6f4249\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.576796 kubelet[3267]: E0424 23:57:40.576654 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"556622c8-9156-4147-b4ef-3b90cb6f4249\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd8994f4c-c6q9c" podUID="556622c8-9156-4147-b4ef-3b90cb6f4249" Apr 24 23:57:40.589909 systemd[1]: Started cri-containerd-8e1af0ebc58dd4d640ff6e004a50f78a354ed5b604cfc9971aa1ace2e3471d1e.scope - libcontainer container 8e1af0ebc58dd4d640ff6e004a50f78a354ed5b604cfc9971aa1ace2e3471d1e. 
Apr 24 23:57:40.616641 containerd[1723]: time="2026-04-24T23:57:40.616575997Z" level=error msg="StopPodSandbox for \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\" failed" error="failed to destroy network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.617237 kubelet[3267]: E0424 23:57:40.617188 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:57:40.617415 kubelet[3267]: E0424 23:57:40.617261 3267 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42"} Apr 24 23:57:40.617415 kubelet[3267]: E0424 23:57:40.617303 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba5ea344-e24c-488b-ad7b-af64eecfd3fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.617561 kubelet[3267]: E0424 23:57:40.617424 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba5ea344-e24c-488b-ad7b-af64eecfd3fe\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwxs4" podUID="ba5ea344-e24c-488b-ad7b-af64eecfd3fe" Apr 24 23:57:40.619596 containerd[1723]: time="2026-04-24T23:57:40.619546138Z" level=error msg="StopPodSandbox for \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\" failed" error="failed to destroy network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.620023 kubelet[3267]: E0424 23:57:40.619951 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:57:40.620765 kubelet[3267]: E0424 23:57:40.620033 3267 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324"} Apr 24 23:57:40.620765 kubelet[3267]: E0424 23:57:40.620183 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a31630e-98ec-43f7-b187-040b947d7c6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.620765 kubelet[3267]: E0424 23:57:40.620233 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a31630e-98ec-43f7-b187-040b947d7c6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7876b86597-fghhg" podUID="4a31630e-98ec-43f7-b187-040b947d7c6b" Apr 24 23:57:40.625722 containerd[1723]: time="2026-04-24T23:57:40.625528419Z" level=error msg="StopPodSandbox for \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\" failed" error="failed to destroy network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.625909 kubelet[3267]: E0424 23:57:40.625825 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Apr 24 23:57:40.625909 kubelet[3267]: E0424 23:57:40.625888 3267 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"} Apr 24 23:57:40.626720 kubelet[3267]: E0424 23:57:40.625922 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67de6f2b-7589-40b6-8033-934e9c5ab432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.626720 kubelet[3267]: E0424 23:57:40.626070 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67de6f2b-7589-40b6-8033-934e9c5ab432\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4hsvs" podUID="67de6f2b-7589-40b6-8033-934e9c5ab432" Apr 24 23:57:40.627008 containerd[1723]: time="2026-04-24T23:57:40.626890337Z" level=error msg="StopPodSandbox for \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\" failed" error="failed to destroy network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:40.627139 kubelet[3267]: E0424 23:57:40.627097 3267 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:57:40.627211 kubelet[3267]: E0424 23:57:40.627150 3267 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85"} Apr 24 23:57:40.627211 kubelet[3267]: E0424 23:57:40.627185 3267 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"105586a9-fa42-4f9c-8b39-19852c899d53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:40.627324 kubelet[3267]: E0424 23:57:40.627214 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"105586a9-fa42-4f9c-8b39-19852c899d53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-frn4w" podUID="105586a9-fa42-4f9c-8b39-19852c899d53" Apr 24 23:57:40.651743 containerd[1723]: time="2026-04-24T23:57:40.651696174Z" level=info msg="StartContainer for 
\"8e1af0ebc58dd4d640ff6e004a50f78a354ed5b604cfc9971aa1ace2e3471d1e\" returns successfully" Apr 24 23:57:41.427440 containerd[1723]: time="2026-04-24T23:57:41.426997998Z" level=info msg="StopPodSandbox for \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\"" Apr 24 23:57:41.478816 systemd[1]: run-containerd-runc-k8s.io-8e1af0ebc58dd4d640ff6e004a50f78a354ed5b604cfc9971aa1ace2e3471d1e-runc.qxxYVB.mount: Deactivated successfully. Apr 24 23:57:41.519150 kubelet[3267]: I0424 23:57:41.519003 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jtdxm" podStartSLOduration=6.260351744 podStartE2EDuration="36.518937746s" podCreationTimestamp="2026-04-24 23:57:05 +0000 UTC" firstStartedPulling="2026-04-24 23:57:05.948585556 +0000 UTC m=+21.910013079" lastFinishedPulling="2026-04-24 23:57:36.207171558 +0000 UTC m=+52.168599081" observedRunningTime="2026-04-24 23:57:41.46322689 +0000 UTC m=+57.424654413" watchObservedRunningTime="2026-04-24 23:57:41.518937746 +0000 UTC m=+57.480365269" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.517 [INFO][4520] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.518 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" iface="eth0" netns="/var/run/netns/cni-a635211a-cd55-a50d-b297-c2e3eb4a1b0b" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.518 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" iface="eth0" netns="/var/run/netns/cni-a635211a-cd55-a50d-b297-c2e3eb4a1b0b" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.518 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" iface="eth0" netns="/var/run/netns/cni-a635211a-cd55-a50d-b297-c2e3eb4a1b0b" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.518 [INFO][4520] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.519 [INFO][4520] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.555 [INFO][4548] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.555 [INFO][4548] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.555 [INFO][4548] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.562 [WARNING][4548] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.562 [INFO][4548] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.564 [INFO][4548] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:41.569574 containerd[1723]: 2026-04-24 23:57:41.566 [INFO][4520] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:41.569574 containerd[1723]: time="2026-04-24T23:57:41.569463732Z" level=info msg="TearDown network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\" successfully" Apr 24 23:57:41.569574 containerd[1723]: time="2026-04-24T23:57:41.569500632Z" level=info msg="StopPodSandbox for \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\" returns successfully" Apr 24 23:57:41.573633 systemd[1]: run-netns-cni\x2da635211a\x2dcd55\x2da50d\x2db297\x2dc2e3eb4a1b0b.mount: Deactivated successfully. 
Apr 24 23:57:41.605349 kubelet[3267]: I0424 23:57:41.605292 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-nginx-config\") pod \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " Apr 24 23:57:41.605511 kubelet[3267]: I0424 23:57:41.605357 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxs44\" (UniqueName: \"kubernetes.io/projected/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-kube-api-access-gxs44\") pod \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " Apr 24 23:57:41.605511 kubelet[3267]: I0424 23:57:41.605407 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-backend-key-pair\") pod \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " Apr 24 23:57:41.605511 kubelet[3267]: I0424 23:57:41.605440 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-ca-bundle\") pod \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\" (UID: \"ba1252cc-9f02-40b0-83cf-8bd40241ac3c\") " Apr 24 23:57:41.606908 kubelet[3267]: I0424 23:57:41.605967 3267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ba1252cc-9f02-40b0-83cf-8bd40241ac3c" (UID: "ba1252cc-9f02-40b0-83cf-8bd40241ac3c"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:57:41.607969 kubelet[3267]: I0424 23:57:41.607838 3267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ba1252cc-9f02-40b0-83cf-8bd40241ac3c" (UID: "ba1252cc-9f02-40b0-83cf-8bd40241ac3c"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:57:41.613299 kubelet[3267]: I0424 23:57:41.611952 3267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ba1252cc-9f02-40b0-83cf-8bd40241ac3c" (UID: "ba1252cc-9f02-40b0-83cf-8bd40241ac3c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:57:41.613573 kubelet[3267]: I0424 23:57:41.613550 3267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-kube-api-access-gxs44" (OuterVolumeSpecName: "kube-api-access-gxs44") pod "ba1252cc-9f02-40b0-83cf-8bd40241ac3c" (UID: "ba1252cc-9f02-40b0-83cf-8bd40241ac3c"). InnerVolumeSpecName "kube-api-access-gxs44". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:57:41.613568 systemd[1]: var-lib-kubelet-pods-ba1252cc\x2d9f02\x2d40b0\x2d83cf\x2d8bd40241ac3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxs44.mount: Deactivated successfully. Apr 24 23:57:41.613700 systemd[1]: var-lib-kubelet-pods-ba1252cc\x2d9f02\x2d40b0\x2d83cf\x2d8bd40241ac3c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 24 23:57:41.706714 kubelet[3267]: I0424 23:57:41.706563 3267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gxs44\" (UniqueName: \"kubernetes.io/projected/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-kube-api-access-gxs44\") on node \"ci-4081.3.6-n-b07cc1dc35\" DevicePath \"\"" Apr 24 23:57:41.706714 kubelet[3267]: I0424 23:57:41.706603 3267 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-b07cc1dc35\" DevicePath \"\"" Apr 24 23:57:41.706714 kubelet[3267]: I0424 23:57:41.706616 3267 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-whisker-ca-bundle\") on node \"ci-4081.3.6-n-b07cc1dc35\" DevicePath \"\"" Apr 24 23:57:41.706714 kubelet[3267]: I0424 23:57:41.706631 3267 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ba1252cc-9f02-40b0-83cf-8bd40241ac3c-nginx-config\") on node \"ci-4081.3.6-n-b07cc1dc35\" DevicePath \"\"" Apr 24 23:57:42.151446 systemd[1]: Removed slice kubepods-besteffort-podba1252cc_9f02_40b0_83cf_8bd40241ac3c.slice - libcontainer container kubepods-besteffort-podba1252cc_9f02_40b0_83cf_8bd40241ac3c.slice. Apr 24 23:57:42.373380 kernel: calico-node[4583]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:57:42.574597 systemd[1]: Created slice kubepods-besteffort-pod3ccd1098_dcbf_4328_a4e8_54b9ea6e7ec9.slice - libcontainer container kubepods-besteffort-pod3ccd1098_dcbf_4328_a4e8_54b9ea6e7ec9.slice. 
Apr 24 23:57:42.617713 kubelet[3267]: I0424 23:57:42.613438 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9-whisker-backend-key-pair\") pod \"whisker-675fc7968c-4vjnb\" (UID: \"3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9\") " pod="calico-system/whisker-675fc7968c-4vjnb" Apr 24 23:57:42.617713 kubelet[3267]: I0424 23:57:42.613557 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9-whisker-ca-bundle\") pod \"whisker-675fc7968c-4vjnb\" (UID: \"3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9\") " pod="calico-system/whisker-675fc7968c-4vjnb" Apr 24 23:57:42.617713 kubelet[3267]: I0424 23:57:42.613622 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh962\" (UniqueName: \"kubernetes.io/projected/3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9-kube-api-access-mh962\") pod \"whisker-675fc7968c-4vjnb\" (UID: \"3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9\") " pod="calico-system/whisker-675fc7968c-4vjnb" Apr 24 23:57:42.617713 kubelet[3267]: I0424 23:57:42.613655 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9-nginx-config\") pod \"whisker-675fc7968c-4vjnb\" (UID: \"3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9\") " pod="calico-system/whisker-675fc7968c-4vjnb" Apr 24 23:57:42.887638 containerd[1723]: time="2026-04-24T23:57:42.887152618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675fc7968c-4vjnb,Uid:3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:43.132938 systemd-networkd[1362]: calibedb26a67a9: Link UP Apr 24 23:57:43.134540 systemd-networkd[1362]: 
calibedb26a67a9: Gained carrier Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:42.977 [INFO][4704] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0 whisker-675fc7968c- calico-system 3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9 950 0 2026-04-24 23:57:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:675fc7968c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 whisker-675fc7968c-4vjnb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibedb26a67a9 [] [] }} ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:42.977 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.028 [INFO][4717] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" HandleID="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.041 [INFO][4717] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" 
HandleID="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036f950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"whisker-675fc7968c-4vjnb", "timestamp":"2026-04-24 23:57:43.028226432 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000264dc0)} Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.042 [INFO][4717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.042 [INFO][4717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.042 [INFO][4717] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.045 [INFO][4717] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.050 [INFO][4717] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.056 [INFO][4717] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.058 [INFO][4717] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.060 [INFO][4717] 
ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.060 [INFO][4717] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.062 [INFO][4717] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.069 [INFO][4717] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.078 [INFO][4717] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.54.129/26] block=192.168.54.128/26 handle="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.078 [INFO][4717] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.129/26] handle="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.078 [INFO][4717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 23:57:43.168395 containerd[1723]: 2026-04-24 23:57:43.078 [INFO][4717] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.129/26] IPv6=[] ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" HandleID="k8s-pod-network.b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" Apr 24 23:57:43.169326 containerd[1723]: 2026-04-24 23:57:43.083 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0", GenerateName:"whisker-675fc7968c-", Namespace:"calico-system", SelfLink:"", UID:"3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675fc7968c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"whisker-675fc7968c-4vjnb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calibedb26a67a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:43.169326 containerd[1723]: 2026-04-24 23:57:43.083 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.129/32] ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" Apr 24 23:57:43.169326 containerd[1723]: 2026-04-24 23:57:43.084 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibedb26a67a9 ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" Apr 24 23:57:43.169326 containerd[1723]: 2026-04-24 23:57:43.135 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" Apr 24 23:57:43.169326 containerd[1723]: 2026-04-24 23:57:43.136 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0", GenerateName:"whisker-675fc7968c-", Namespace:"calico-system", SelfLink:"", 
UID:"3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675fc7968c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a", Pod:"whisker-675fc7968c-4vjnb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibedb26a67a9", MAC:"de:ca:1d:ef:74:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:43.169326 containerd[1723]: 2026-04-24 23:57:43.161 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a" Namespace="calico-system" Pod="whisker-675fc7968c-4vjnb" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--675fc7968c--4vjnb-eth0" Apr 24 23:57:43.209190 containerd[1723]: time="2026-04-24T23:57:43.208828684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:43.209190 containerd[1723]: time="2026-04-24T23:57:43.208896985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:43.209190 containerd[1723]: time="2026-04-24T23:57:43.208915985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:43.209190 containerd[1723]: time="2026-04-24T23:57:43.209010786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:43.240213 systemd-networkd[1362]: vxlan.calico: Link UP Apr 24 23:57:43.240224 systemd-networkd[1362]: vxlan.calico: Gained carrier Apr 24 23:57:43.259618 systemd[1]: Started cri-containerd-b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a.scope - libcontainer container b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a. Apr 24 23:57:43.321885 containerd[1723]: time="2026-04-24T23:57:43.321736316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675fc7968c-4vjnb,Uid:3ccd1098-dcbf-4328-a4e8-54b9ea6e7ec9,Namespace:calico-system,Attempt:0,} returns sandbox id \"b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a\"" Apr 24 23:57:43.324792 containerd[1723]: time="2026-04-24T23:57:43.324671256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:57:44.136049 containerd[1723]: time="2026-04-24T23:57:44.135998069Z" level=info msg="StopPodSandbox for \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\"" Apr 24 23:57:44.143107 kubelet[3267]: I0424 23:57:44.142710 3267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1252cc-9f02-40b0-83cf-8bd40241ac3c" path="/var/lib/kubelet/pods/ba1252cc-9f02-40b0-83cf-8bd40241ac3c/volumes" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.173 [WARNING][4880] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.174 [INFO][4880] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.174 [INFO][4880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" iface="eth0" netns="" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.174 [INFO][4880] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.174 [INFO][4880] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.202 [INFO][4889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.203 [INFO][4889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.203 [INFO][4889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.211 [WARNING][4889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.212 [INFO][4889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.214 [INFO][4889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:44.217224 containerd[1723]: 2026-04-24 23:57:44.215 [INFO][4880] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.217788 containerd[1723]: time="2026-04-24T23:57:44.217277172Z" level=info msg="TearDown network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\" successfully" Apr 24 23:57:44.217788 containerd[1723]: time="2026-04-24T23:57:44.217336473Z" level=info msg="StopPodSandbox for \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\" returns successfully" Apr 24 23:57:44.218086 containerd[1723]: time="2026-04-24T23:57:44.218051383Z" level=info msg="RemovePodSandbox for \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\"" Apr 24 23:57:44.218188 containerd[1723]: time="2026-04-24T23:57:44.218091383Z" level=info msg="Forcibly stopping sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\"" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.261 [WARNING][4903] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.261 [INFO][4903] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.261 [INFO][4903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" iface="eth0" netns="" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.261 [INFO][4903] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.261 [INFO][4903] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.282 [INFO][4910] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.282 [INFO][4910] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.282 [INFO][4910] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.288 [WARNING][4910] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.288 [INFO][4910] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" HandleID="k8s-pod-network.ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-whisker--79bcdf8478--559x4-eth0" Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.290 [INFO][4910] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:44.292735 containerd[1723]: 2026-04-24 23:57:44.291 [INFO][4903] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51" Apr 24 23:57:44.292735 containerd[1723]: time="2026-04-24T23:57:44.292600495Z" level=info msg="TearDown network for sandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\" successfully" Apr 24 23:57:44.302217 containerd[1723]: time="2026-04-24T23:57:44.301910521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:57:44.302217 containerd[1723]: time="2026-04-24T23:57:44.302054123Z" level=info msg="RemovePodSandbox \"ae7161425a973cce1c976dfcf8f26375220851ba92cb2fd758335383238d1d51\" returns successfully" Apr 24 23:57:44.403972 systemd-networkd[1362]: calibedb26a67a9: Gained IPv6LL Apr 24 23:57:44.561758 containerd[1723]: time="2026-04-24T23:57:44.561702047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:44.563903 containerd[1723]: time="2026-04-24T23:57:44.563695674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 24 23:57:44.566593 containerd[1723]: time="2026-04-24T23:57:44.566411511Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:44.571154 containerd[1723]: time="2026-04-24T23:57:44.571050974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:44.571915 containerd[1723]: time="2026-04-24T23:57:44.571757784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.247029627s" Apr 24 23:57:44.571915 containerd[1723]: time="2026-04-24T23:57:44.571798184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 24 23:57:44.579302 containerd[1723]: 
time="2026-04-24T23:57:44.579269386Z" level=info msg="CreateContainer within sandbox \"b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:57:44.611212 containerd[1723]: time="2026-04-24T23:57:44.611157019Z" level=info msg="CreateContainer within sandbox \"b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"822e33c527bc2f16a0bba00c02083e17a2e7dad9dc93d80283cf0d51d4d02ee2\"" Apr 24 23:57:44.612185 containerd[1723]: time="2026-04-24T23:57:44.612140332Z" level=info msg="StartContainer for \"822e33c527bc2f16a0bba00c02083e17a2e7dad9dc93d80283cf0d51d4d02ee2\"" Apr 24 23:57:44.651211 systemd[1]: run-containerd-runc-k8s.io-822e33c527bc2f16a0bba00c02083e17a2e7dad9dc93d80283cf0d51d4d02ee2-runc.ITR7GM.mount: Deactivated successfully. Apr 24 23:57:44.657508 systemd[1]: Started cri-containerd-822e33c527bc2f16a0bba00c02083e17a2e7dad9dc93d80283cf0d51d4d02ee2.scope - libcontainer container 822e33c527bc2f16a0bba00c02083e17a2e7dad9dc93d80283cf0d51d4d02ee2. Apr 24 23:57:44.703299 containerd[1723]: time="2026-04-24T23:57:44.703156367Z" level=info msg="StartContainer for \"822e33c527bc2f16a0bba00c02083e17a2e7dad9dc93d80283cf0d51d4d02ee2\" returns successfully" Apr 24 23:57:44.706567 containerd[1723]: time="2026-04-24T23:57:44.706508013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:57:45.043765 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Apr 24 23:57:46.273597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013858910.mount: Deactivated successfully. 
Apr 24 23:57:46.321533 containerd[1723]: time="2026-04-24T23:57:46.321475655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:46.323804 containerd[1723]: time="2026-04-24T23:57:46.323738887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 24 23:57:46.326972 containerd[1723]: time="2026-04-24T23:57:46.326908731Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:46.331218 containerd[1723]: time="2026-04-24T23:57:46.331035988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:46.332425 containerd[1723]: time="2026-04-24T23:57:46.331750498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.625182384s" Apr 24 23:57:46.332425 containerd[1723]: time="2026-04-24T23:57:46.331830399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 24 23:57:46.339410 containerd[1723]: time="2026-04-24T23:57:46.339380304Z" level=info msg="CreateContainer within sandbox \"b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:57:46.372083 
containerd[1723]: time="2026-04-24T23:57:46.372027056Z" level=info msg="CreateContainer within sandbox \"b87dd4ff6303ff356d49be376ff2449075a0a342439fb280a340d260e66dcd5a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f93024759f672bb2f854f6fabbd8ca972551f56db017e7915aeef51956c38d80\"" Apr 24 23:57:46.373236 containerd[1723]: time="2026-04-24T23:57:46.373199473Z" level=info msg="StartContainer for \"f93024759f672bb2f854f6fabbd8ca972551f56db017e7915aeef51956c38d80\"" Apr 24 23:57:46.412568 systemd[1]: Started cri-containerd-f93024759f672bb2f854f6fabbd8ca972551f56db017e7915aeef51956c38d80.scope - libcontainer container f93024759f672bb2f854f6fabbd8ca972551f56db017e7915aeef51956c38d80. Apr 24 23:57:46.460761 containerd[1723]: time="2026-04-24T23:57:46.460713086Z" level=info msg="StartContainer for \"f93024759f672bb2f854f6fabbd8ca972551f56db017e7915aeef51956c38d80\" returns successfully" Apr 24 23:57:47.466823 kubelet[3267]: I0424 23:57:47.466746 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-675fc7968c-4vjnb" podStartSLOduration=2.4577168719999998 podStartE2EDuration="5.46672634s" podCreationTimestamp="2026-04-24 23:57:42 +0000 UTC" firstStartedPulling="2026-04-24 23:57:43.323734044 +0000 UTC m=+59.285161567" lastFinishedPulling="2026-04-24 23:57:46.332743512 +0000 UTC m=+62.294171035" observedRunningTime="2026-04-24 23:57:47.466626739 +0000 UTC m=+63.428054362" watchObservedRunningTime="2026-04-24 23:57:47.46672634 +0000 UTC m=+63.428153863" Apr 24 23:57:51.139368 containerd[1723]: time="2026-04-24T23:57:51.138875873Z" level=info msg="StopPodSandbox for \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\"" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.184 [INFO][5041] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.184 
[INFO][5041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" iface="eth0" netns="/var/run/netns/cni-25f58bd5-885b-a75e-18de-0cf3c5d642af" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.185 [INFO][5041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" iface="eth0" netns="/var/run/netns/cni-25f58bd5-885b-a75e-18de-0cf3c5d642af" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.187 [INFO][5041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" iface="eth0" netns="/var/run/netns/cni-25f58bd5-885b-a75e-18de-0cf3c5d642af" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.187 [INFO][5041] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.187 [INFO][5041] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.210 [INFO][5049] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.210 [INFO][5049] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.210 [INFO][5049] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.219 [WARNING][5049] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.219 [INFO][5049] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.220 [INFO][5049] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:51.223221 containerd[1723]: 2026-04-24 23:57:51.221 [INFO][5041] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Apr 24 23:57:51.225612 containerd[1723]: time="2026-04-24T23:57:51.225530675Z" level=info msg="TearDown network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\" successfully" Apr 24 23:57:51.225612 containerd[1723]: time="2026-04-24T23:57:51.225578976Z" level=info msg="StopPodSandbox for \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\" returns successfully" Apr 24 23:57:51.228437 containerd[1723]: time="2026-04-24T23:57:51.226595390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4hsvs,Uid:67de6f2b-7589-40b6-8033-934e9c5ab432,Namespace:kube-system,Attempt:1,}" Apr 24 23:57:51.228706 systemd[1]: run-netns-cni\x2d25f58bd5\x2d885b\x2da75e\x2d18de\x2d0cf3c5d642af.mount: Deactivated successfully. 
Apr 24 23:57:51.369807 systemd-networkd[1362]: calicd99befcb2b: Link UP Apr 24 23:57:51.371494 systemd-networkd[1362]: calicd99befcb2b: Gained carrier Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.300 [INFO][5055] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0 coredns-674b8bbfcf- kube-system 67de6f2b-7589-40b6-8033-934e9c5ab432 989 0 2026-04-24 23:56:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 coredns-674b8bbfcf-4hsvs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd99befcb2b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.300 [INFO][5055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.326 [INFO][5067] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" HandleID="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.334 [INFO][5067] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" HandleID="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"coredns-674b8bbfcf-4hsvs", "timestamp":"2026-04-24 23:57:51.326875781 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040ef20)} Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.334 [INFO][5067] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.334 [INFO][5067] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.334 [INFO][5067] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.337 [INFO][5067] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.340 [INFO][5067] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.346 [INFO][5067] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.347 [INFO][5067] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.349 [INFO][5067] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.349 [INFO][5067] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.351 [INFO][5067] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0 Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.355 [INFO][5067] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.363 [INFO][5067] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.54.130/26] block=192.168.54.128/26 handle="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.363 [INFO][5067] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.130/26] handle="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.363 [INFO][5067] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:51.395980 containerd[1723]: 2026-04-24 23:57:51.363 [INFO][5067] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.130/26] IPv6=[] ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" HandleID="k8s-pod-network.738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.398663 containerd[1723]: 2026-04-24 23:57:51.365 [INFO][5055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67de6f2b-7589-40b6-8033-934e9c5ab432", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"coredns-674b8bbfcf-4hsvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd99befcb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:51.398663 containerd[1723]: 2026-04-24 23:57:51.365 [INFO][5055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.130/32] ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.398663 containerd[1723]: 2026-04-24 23:57:51.366 [INFO][5055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd99befcb2b ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.398663 containerd[1723]: 2026-04-24 23:57:51.374 [INFO][5055] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.398663 containerd[1723]: 2026-04-24 23:57:51.374 [INFO][5055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67de6f2b-7589-40b6-8033-934e9c5ab432", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0", Pod:"coredns-674b8bbfcf-4hsvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd99befcb2b", MAC:"ea:28:2d:f5:67:17", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:51.398663 containerd[1723]: 2026-04-24 23:57:51.391 [INFO][5055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0" Namespace="kube-system" Pod="coredns-674b8bbfcf-4hsvs" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0" Apr 24 23:57:51.430445 containerd[1723]: time="2026-04-24T23:57:51.430189114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:51.430445 containerd[1723]: time="2026-04-24T23:57:51.430253315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:51.430445 containerd[1723]: time="2026-04-24T23:57:51.430268615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:51.430445 containerd[1723]: time="2026-04-24T23:57:51.430382917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:51.473796 systemd[1]: Started cri-containerd-738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0.scope - libcontainer container 738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0. 
Apr 24 23:57:51.533327 containerd[1723]: time="2026-04-24T23:57:51.533268744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4hsvs,Uid:67de6f2b-7589-40b6-8033-934e9c5ab432,Namespace:kube-system,Attempt:1,} returns sandbox id \"738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0\"" Apr 24 23:57:51.541667 containerd[1723]: time="2026-04-24T23:57:51.541607359Z" level=info msg="CreateContainer within sandbox \"738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:57:51.582806 containerd[1723]: time="2026-04-24T23:57:51.582753730Z" level=info msg="CreateContainer within sandbox \"738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"638a86170c31b81bd14561dd35fefb381521f98ac376119a10fe3eadcd2ea51d\"" Apr 24 23:57:51.583706 containerd[1723]: time="2026-04-24T23:57:51.583675043Z" level=info msg="StartContainer for \"638a86170c31b81bd14561dd35fefb381521f98ac376119a10fe3eadcd2ea51d\"" Apr 24 23:57:51.613550 systemd[1]: Started cri-containerd-638a86170c31b81bd14561dd35fefb381521f98ac376119a10fe3eadcd2ea51d.scope - libcontainer container 638a86170c31b81bd14561dd35fefb381521f98ac376119a10fe3eadcd2ea51d. 
Apr 24 23:57:51.644928 containerd[1723]: time="2026-04-24T23:57:51.644876792Z" level=info msg="StartContainer for \"638a86170c31b81bd14561dd35fefb381521f98ac376119a10fe3eadcd2ea51d\" returns successfully" Apr 24 23:57:52.481522 kubelet[3267]: I0424 23:57:52.481444 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4hsvs" podStartSLOduration=61.481421495 podStartE2EDuration="1m1.481421495s" podCreationTimestamp="2026-04-24 23:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:57:52.479863473 +0000 UTC m=+68.441290996" watchObservedRunningTime="2026-04-24 23:57:52.481421495 +0000 UTC m=+68.442849018" Apr 24 23:57:52.979516 systemd-networkd[1362]: calicd99befcb2b: Gained IPv6LL Apr 24 23:57:53.140301 containerd[1723]: time="2026-04-24T23:57:53.139598132Z" level=info msg="StopPodSandbox for \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\"" Apr 24 23:57:53.140301 containerd[1723]: time="2026-04-24T23:57:53.139889936Z" level=info msg="StopPodSandbox for \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\"" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.211 [INFO][5203] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.211 [INFO][5203] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" iface="eth0" netns="/var/run/netns/cni-b5c528e3-c7a7-8ba0-2657-13b8444962b6" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.212 [INFO][5203] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" iface="eth0" netns="/var/run/netns/cni-b5c528e3-c7a7-8ba0-2657-13b8444962b6" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.212 [INFO][5203] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" iface="eth0" netns="/var/run/netns/cni-b5c528e3-c7a7-8ba0-2657-13b8444962b6" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.212 [INFO][5203] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.212 [INFO][5203] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.244 [INFO][5220] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.245 [INFO][5220] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.245 [INFO][5220] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.251 [WARNING][5220] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.251 [INFO][5220] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.253 [INFO][5220] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.256808 containerd[1723]: 2026-04-24 23:57:53.255 [INFO][5203] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Apr 24 23:57:53.260324 containerd[1723]: time="2026-04-24T23:57:53.259413798Z" level=info msg="TearDown network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\" successfully" Apr 24 23:57:53.260324 containerd[1723]: time="2026-04-24T23:57:53.259458099Z" level=info msg="StopPodSandbox for \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\" returns successfully" Apr 24 23:57:53.264096 systemd[1]: run-netns-cni\x2db5c528e3\x2dc7a7\x2d8ba0\x2d2657\x2d13b8444962b6.mount: Deactivated successfully. 
Apr 24 23:57:53.268514 containerd[1723]: time="2026-04-24T23:57:53.268470224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4cgp8,Uid:d89c2d22-a648-4465-85aa-6b284aea19c0,Namespace:kube-system,Attempt:1,}" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.205 [INFO][5210] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.205 [INFO][5210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" iface="eth0" netns="/var/run/netns/cni-c4f9bdad-d7c7-7bd6-56a3-cec786429351" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.209 [INFO][5210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" iface="eth0" netns="/var/run/netns/cni-c4f9bdad-d7c7-7bd6-56a3-cec786429351" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.212 [INFO][5210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" iface="eth0" netns="/var/run/netns/cni-c4f9bdad-d7c7-7bd6-56a3-cec786429351" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.212 [INFO][5210] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.213 [INFO][5210] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.246 [INFO][5222] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.246 [INFO][5222] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.253 [INFO][5222] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.266 [WARNING][5222] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.267 [INFO][5222] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.269 [INFO][5222] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.273637 containerd[1723]: 2026-04-24 23:57:53.271 [INFO][5210] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:57:53.276547 containerd[1723]: time="2026-04-24T23:57:53.276421235Z" level=info msg="TearDown network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\" successfully" Apr 24 23:57:53.276547 containerd[1723]: time="2026-04-24T23:57:53.276474136Z" level=info msg="StopPodSandbox for \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\" returns successfully" Apr 24 23:57:53.280319 containerd[1723]: time="2026-04-24T23:57:53.280044085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-j8278,Uid:4f4f44d0-9a79-44e1-a0dd-24732d17ab45,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:53.281281 systemd[1]: run-netns-cni\x2dc4f9bdad\x2dd7c7\x2d7bd6\x2d56a3\x2dcec786429351.mount: Deactivated successfully. 
Apr 24 23:57:53.460101 systemd-networkd[1362]: cali4efccf0c213: Link UP Apr 24 23:57:53.460699 systemd-networkd[1362]: cali4efccf0c213: Gained carrier Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.380 [INFO][5243] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0 coredns-674b8bbfcf- kube-system d89c2d22-a648-4465-85aa-6b284aea19c0 1012 0 2026-04-24 23:56:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 coredns-674b8bbfcf-4cgp8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4efccf0c213 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.380 [INFO][5243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.417 [INFO][5260] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" HandleID="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.425 [INFO][5260] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" HandleID="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"coredns-674b8bbfcf-4cgp8", "timestamp":"2026-04-24 23:57:53.417302994 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003cd080)} Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.425 [INFO][5260] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.425 [INFO][5260] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.426 [INFO][5260] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.428 [INFO][5260] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.433 [INFO][5260] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.436 [INFO][5260] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.438 [INFO][5260] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.440 [INFO][5260] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.440 [INFO][5260] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.442 [INFO][5260] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7 Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.446 [INFO][5260] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.453 [INFO][5260] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.54.131/26] block=192.168.54.128/26 handle="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.454 [INFO][5260] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.131/26] handle="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.454 [INFO][5260] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.482101 containerd[1723]: 2026-04-24 23:57:53.454 [INFO][5260] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.131/26] IPv6=[] ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" HandleID="k8s-pod-network.79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.483054 containerd[1723]: 2026-04-24 23:57:53.456 [INFO][5243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d89c2d22-a648-4465-85aa-6b284aea19c0", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"coredns-674b8bbfcf-4cgp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4efccf0c213", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.483054 containerd[1723]: 2026-04-24 23:57:53.456 [INFO][5243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.131/32] ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.483054 containerd[1723]: 2026-04-24 23:57:53.456 [INFO][5243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4efccf0c213 ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.483054 containerd[1723]: 2026-04-24 23:57:53.460 [INFO][5243] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.483054 containerd[1723]: 2026-04-24 23:57:53.461 [INFO][5243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d89c2d22-a648-4465-85aa-6b284aea19c0", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7", Pod:"coredns-674b8bbfcf-4cgp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4efccf0c213", 
MAC:"86:5d:d3:27:df:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.483054 containerd[1723]: 2026-04-24 23:57:53.479 [INFO][5243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-4cgp8" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0" Apr 24 23:57:53.520271 containerd[1723]: time="2026-04-24T23:57:53.520095124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:53.520708 containerd[1723]: time="2026-04-24T23:57:53.520475029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:53.520708 containerd[1723]: time="2026-04-24T23:57:53.520539230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.520708 containerd[1723]: time="2026-04-24T23:57:53.520665132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.555193 systemd[1]: Started cri-containerd-79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7.scope - libcontainer container 79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7. 
Apr 24 23:57:53.586449 systemd-networkd[1362]: cali5845e2d69ba: Link UP Apr 24 23:57:53.587230 systemd-networkd[1362]: cali5845e2d69ba: Gained carrier Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.376 [INFO][5234] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0 calico-apiserver-7876b86597- calico-system 4f4f44d0-9a79-44e1-a0dd-24732d17ab45 1011 0 2026-04-24 23:57:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7876b86597 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 calico-apiserver-7876b86597-j8278 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5845e2d69ba [] [] }} ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.376 [INFO][5234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.415 [INFO][5258] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" HandleID="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.622029 
containerd[1723]: 2026-04-24 23:57:53.426 [INFO][5258] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" HandleID="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"calico-apiserver-7876b86597-j8278", "timestamp":"2026-04-24 23:57:53.415741573 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e51e0)} Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.426 [INFO][5258] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.454 [INFO][5258] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.454 [INFO][5258] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.532 [INFO][5258] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.540 [INFO][5258] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.549 [INFO][5258] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.552 [INFO][5258] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.555 [INFO][5258] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.555 [INFO][5258] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.558 [INFO][5258] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63 Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.565 [INFO][5258] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.577 [INFO][5258] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.54.132/26] block=192.168.54.128/26 handle="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.577 [INFO][5258] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.132/26] handle="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.577 [INFO][5258] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.622029 containerd[1723]: 2026-04-24 23:57:53.577 [INFO][5258] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.132/26] IPv6=[] ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" HandleID="k8s-pod-network.21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.624126 containerd[1723]: 2026-04-24 23:57:53.581 [INFO][5234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4f4f44d0-9a79-44e1-a0dd-24732d17ab45", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"calico-apiserver-7876b86597-j8278", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5845e2d69ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.624126 containerd[1723]: 2026-04-24 23:57:53.581 [INFO][5234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.132/32] ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.624126 containerd[1723]: 2026-04-24 23:57:53.581 [INFO][5234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5845e2d69ba ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.624126 containerd[1723]: 2026-04-24 23:57:53.585 [INFO][5234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" 
WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.624126 containerd[1723]: 2026-04-24 23:57:53.588 [INFO][5234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4f4f44d0-9a79-44e1-a0dd-24732d17ab45", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63", Pod:"calico-apiserver-7876b86597-j8278", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5845e2d69ba", MAC:"fa:87:17:66:a7:05", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.624126 containerd[1723]: 2026-04-24 23:57:53.617 [INFO][5234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63" Namespace="calico-system" Pod="calico-apiserver-7876b86597-j8278" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:57:53.650424 containerd[1723]: time="2026-04-24T23:57:53.649183419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4cgp8,Uid:d89c2d22-a648-4465-85aa-6b284aea19c0,Namespace:kube-system,Attempt:1,} returns sandbox id \"79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7\"" Apr 24 23:57:53.660949 containerd[1723]: time="2026-04-24T23:57:53.660708579Z" level=info msg="CreateContainer within sandbox \"79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:57:53.669159 containerd[1723]: time="2026-04-24T23:57:53.668887393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:53.669159 containerd[1723]: time="2026-04-24T23:57:53.668962194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:53.669159 containerd[1723]: time="2026-04-24T23:57:53.668981594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.669159 containerd[1723]: time="2026-04-24T23:57:53.669078496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.702003 containerd[1723]: time="2026-04-24T23:57:53.701260843Z" level=info msg="CreateContainer within sandbox \"79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89d34464be94b5abfe6fa5c91f2d0709c654d972379d02f0ea0a02b7e768b917\"" Apr 24 23:57:53.704552 containerd[1723]: time="2026-04-24T23:57:53.702564561Z" level=info msg="StartContainer for \"89d34464be94b5abfe6fa5c91f2d0709c654d972379d02f0ea0a02b7e768b917\"" Apr 24 23:57:53.703571 systemd[1]: Started cri-containerd-21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63.scope - libcontainer container 21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63. Apr 24 23:57:53.762557 systemd[1]: Started cri-containerd-89d34464be94b5abfe6fa5c91f2d0709c654d972379d02f0ea0a02b7e768b917.scope - libcontainer container 89d34464be94b5abfe6fa5c91f2d0709c654d972379d02f0ea0a02b7e768b917. 
Apr 24 23:57:53.813728 containerd[1723]: time="2026-04-24T23:57:53.813596306Z" level=info msg="StartContainer for \"89d34464be94b5abfe6fa5c91f2d0709c654d972379d02f0ea0a02b7e768b917\" returns successfully" Apr 24 23:57:53.828079 containerd[1723]: time="2026-04-24T23:57:53.828026506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-j8278,Uid:4f4f44d0-9a79-44e1-a0dd-24732d17ab45,Namespace:calico-system,Attempt:1,} returns sandbox id \"21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63\"" Apr 24 23:57:53.830553 containerd[1723]: time="2026-04-24T23:57:53.830524841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:57:54.140419 containerd[1723]: time="2026-04-24T23:57:54.140368550Z" level=info msg="StopPodSandbox for \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\"" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.186 [INFO][5441] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.186 [INFO][5441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" iface="eth0" netns="/var/run/netns/cni-c0641dc8-fbcc-554d-d4cf-781dcd7ff26e" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.188 [INFO][5441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" iface="eth0" netns="/var/run/netns/cni-c0641dc8-fbcc-554d-d4cf-781dcd7ff26e" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.189 [INFO][5441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" iface="eth0" netns="/var/run/netns/cni-c0641dc8-fbcc-554d-d4cf-781dcd7ff26e" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.189 [INFO][5441] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.189 [INFO][5441] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.217 [INFO][5448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.218 [INFO][5448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.218 [INFO][5448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.224 [WARNING][5448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.224 [INFO][5448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.230 [INFO][5448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:54.233136 containerd[1723]: 2026-04-24 23:57:54.231 [INFO][5441] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:57:54.233929 containerd[1723]: time="2026-04-24T23:57:54.233278942Z" level=info msg="TearDown network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\" successfully" Apr 24 23:57:54.233929 containerd[1723]: time="2026-04-24T23:57:54.233306243Z" level=info msg="StopPodSandbox for \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\" returns successfully" Apr 24 23:57:54.234313 containerd[1723]: time="2026-04-24T23:57:54.234272056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd8994f4c-c6q9c,Uid:556622c8-9156-4147-b4ef-3b90cb6f4249,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:54.268512 systemd[1]: run-netns-cni\x2dc0641dc8\x2dfbcc\x2d554d\x2dd4cf\x2d781dcd7ff26e.mount: Deactivated successfully. 
Apr 24 23:57:54.395182 systemd-networkd[1362]: cali78175e9dfa5: Link UP Apr 24 23:57:54.395489 systemd-networkd[1362]: cali78175e9dfa5: Gained carrier Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.302 [INFO][5455] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0 calico-kube-controllers-7fd8994f4c- calico-system 556622c8-9156-4147-b4ef-3b90cb6f4249 1027 0 2026-04-24 23:57:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fd8994f4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 calico-kube-controllers-7fd8994f4c-c6q9c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali78175e9dfa5 [] [] }} ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.302 [INFO][5455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.346 [INFO][5467] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" HandleID="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" 
Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.358 [INFO][5467] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" HandleID="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002773e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"calico-kube-controllers-7fd8994f4c-c6q9c", "timestamp":"2026-04-24 23:57:54.346371415 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001ecf20)} Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.358 [INFO][5467] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.358 [INFO][5467] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.358 [INFO][5467] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.360 [INFO][5467] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.364 [INFO][5467] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.368 [INFO][5467] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.370 [INFO][5467] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.372 [INFO][5467] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.372 [INFO][5467] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.373 [INFO][5467] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30 Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.380 [INFO][5467] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.389 [INFO][5467] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.54.133/26] block=192.168.54.128/26 handle="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.389 [INFO][5467] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.133/26] handle="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.389 [INFO][5467] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:54.418305 containerd[1723]: 2026-04-24 23:57:54.389 [INFO][5467] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.133/26] IPv6=[] ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" HandleID="k8s-pod-network.d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.420624 containerd[1723]: 2026-04-24 23:57:54.391 [INFO][5455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0", GenerateName:"calico-kube-controllers-7fd8994f4c-", Namespace:"calico-system", SelfLink:"", UID:"556622c8-9156-4147-b4ef-3b90cb6f4249", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd8994f4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"calico-kube-controllers-7fd8994f4c-c6q9c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78175e9dfa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:54.420624 containerd[1723]: 2026-04-24 23:57:54.391 [INFO][5455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.133/32] ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.420624 containerd[1723]: 2026-04-24 23:57:54.391 [INFO][5455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78175e9dfa5 ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.420624 containerd[1723]: 2026-04-24 23:57:54.394 [INFO][5455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.420624 containerd[1723]: 2026-04-24 23:57:54.394 [INFO][5455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0", GenerateName:"calico-kube-controllers-7fd8994f4c-", Namespace:"calico-system", SelfLink:"", UID:"556622c8-9156-4147-b4ef-3b90cb6f4249", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd8994f4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30", Pod:"calico-kube-controllers-7fd8994f4c-c6q9c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78175e9dfa5", MAC:"32:47:d9:95:ed:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:54.420624 containerd[1723]: 2026-04-24 23:57:54.415 [INFO][5455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30" Namespace="calico-system" Pod="calico-kube-controllers-7fd8994f4c-c6q9c" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:57:54.457800 containerd[1723]: time="2026-04-24T23:57:54.457391059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:54.457800 containerd[1723]: time="2026-04-24T23:57:54.457481260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:54.457800 containerd[1723]: time="2026-04-24T23:57:54.457514561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:54.457800 containerd[1723]: time="2026-04-24T23:57:54.457651763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:54.506675 systemd[1]: Started cri-containerd-d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30.scope - libcontainer container d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30. 
Apr 24 23:57:54.535376 kubelet[3267]: I0424 23:57:54.534861 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4cgp8" podStartSLOduration=63.534837236 podStartE2EDuration="1m3.534837236s" podCreationTimestamp="2026-04-24 23:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:57:54.505554929 +0000 UTC m=+70.466982452" watchObservedRunningTime="2026-04-24 23:57:54.534837236 +0000 UTC m=+70.496264759" Apr 24 23:57:54.605008 containerd[1723]: time="2026-04-24T23:57:54.604959411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd8994f4c-c6q9c,Uid:556622c8-9156-4147-b4ef-3b90cb6f4249,Namespace:calico-system,Attempt:1,} returns sandbox id \"d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30\"" Apr 24 23:57:54.963582 systemd-networkd[1362]: cali5845e2d69ba: Gained IPv6LL Apr 24 23:57:55.145633 containerd[1723]: time="2026-04-24T23:57:55.144818819Z" level=info msg="StopPodSandbox for \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\"" Apr 24 23:57:55.150984 containerd[1723]: time="2026-04-24T23:57:55.150940004Z" level=info msg="StopPodSandbox for \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\"" Apr 24 23:57:55.151597 containerd[1723]: time="2026-04-24T23:57:55.151563313Z" level=info msg="StopPodSandbox for \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\"" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.261 [INFO][5565] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.262 [INFO][5565] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" iface="eth0" netns="/var/run/netns/cni-40ddc500-495d-43c0-7ce3-288503f8bd4b" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.262 [INFO][5565] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" iface="eth0" netns="/var/run/netns/cni-40ddc500-495d-43c0-7ce3-288503f8bd4b" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.265 [INFO][5565] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" iface="eth0" netns="/var/run/netns/cni-40ddc500-495d-43c0-7ce3-288503f8bd4b" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.265 [INFO][5565] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.265 [INFO][5565] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.336 [INFO][5582] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.336 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.336 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.344 [WARNING][5582] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.344 [INFO][5582] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.347 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.354113 containerd[1723]: 2026-04-24 23:57:55.350 [INFO][5565] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:57:55.359276 systemd[1]: run-netns-cni\x2d40ddc500\x2d495d\x2d43c0\x2d7ce3\x2d288503f8bd4b.mount: Deactivated successfully. 
Apr 24 23:57:55.360141 containerd[1723]: time="2026-04-24T23:57:55.359439004Z" level=info msg="TearDown network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\" successfully" Apr 24 23:57:55.360141 containerd[1723]: time="2026-04-24T23:57:55.359479905Z" level=info msg="StopPodSandbox for \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\" returns successfully" Apr 24 23:57:55.362053 containerd[1723]: time="2026-04-24T23:57:55.362011640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-fghhg,Uid:4a31630e-98ec-43f7-b187-040b947d7c6b,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.290 [INFO][5566] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.290 [INFO][5566] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" iface="eth0" netns="/var/run/netns/cni-9cfd558b-bef7-c843-53f2-f8684dab8d1a" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.290 [INFO][5566] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" iface="eth0" netns="/var/run/netns/cni-9cfd558b-bef7-c843-53f2-f8684dab8d1a" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.291 [INFO][5566] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" iface="eth0" netns="/var/run/netns/cni-9cfd558b-bef7-c843-53f2-f8684dab8d1a" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.291 [INFO][5566] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.291 [INFO][5566] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.388 [INFO][5588] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.388 [INFO][5588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.388 [INFO][5588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.410 [WARNING][5588] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.411 [INFO][5588] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.414 [INFO][5588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.428537 containerd[1723]: 2026-04-24 23:57:55.417 [INFO][5566] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:57:55.434055 containerd[1723]: time="2026-04-24T23:57:55.428682467Z" level=info msg="TearDown network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\" successfully" Apr 24 23:57:55.434055 containerd[1723]: time="2026-04-24T23:57:55.428725768Z" level=info msg="StopPodSandbox for \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\" returns successfully" Apr 24 23:57:55.434055 containerd[1723]: time="2026-04-24T23:57:55.430096387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwxs4,Uid:ba5ea344-e24c-488b-ad7b-af64eecfd3fe,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:55.441360 systemd[1]: run-netns-cni\x2d9cfd558b\x2dbef7\x2dc843\x2d53f2\x2df8684dab8d1a.mount: Deactivated successfully. 
Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.324 [INFO][5561] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.324 [INFO][5561] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" iface="eth0" netns="/var/run/netns/cni-f4ffe863-d667-44b4-c7d0-8fd0f291f53e" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.325 [INFO][5561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" iface="eth0" netns="/var/run/netns/cni-f4ffe863-d667-44b4-c7d0-8fd0f291f53e" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.325 [INFO][5561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" iface="eth0" netns="/var/run/netns/cni-f4ffe863-d667-44b4-c7d0-8fd0f291f53e" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.325 [INFO][5561] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.325 [INFO][5561] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.391 [INFO][5593] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.391 
[INFO][5593] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.415 [INFO][5593] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.446 [WARNING][5593] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.446 [INFO][5593] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.449 [INFO][5593] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.463245 containerd[1723]: 2026-04-24 23:57:55.456 [INFO][5561] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:57:55.465498 containerd[1723]: time="2026-04-24T23:57:55.464097760Z" level=info msg="TearDown network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\" successfully" Apr 24 23:57:55.465498 containerd[1723]: time="2026-04-24T23:57:55.464241262Z" level=info msg="StopPodSandbox for \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\" returns successfully" Apr 24 23:57:55.465969 containerd[1723]: time="2026-04-24T23:57:55.465941085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-frn4w,Uid:105586a9-fa42-4f9c-8b39-19852c899d53,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:55.472168 systemd[1]: run-netns-cni\x2df4ffe863\x2dd667\x2d44b4\x2dc7d0\x2d8fd0f291f53e.mount: Deactivated successfully. Apr 24 23:57:55.475957 systemd-networkd[1362]: cali4efccf0c213: Gained IPv6LL Apr 24 23:57:55.726809 systemd-networkd[1362]: cali0f9c5c60fbe: Link UP Apr 24 23:57:55.728308 systemd-networkd[1362]: cali0f9c5c60fbe: Gained carrier Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.537 [INFO][5603] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0 calico-apiserver-7876b86597- calico-system 4a31630e-98ec-43f7-b187-040b947d7c6b 1044 0 2026-04-24 23:57:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7876b86597 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 calico-apiserver-7876b86597-fghhg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0f9c5c60fbe [] [] }} ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" 
Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.537 [INFO][5603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.647 [INFO][5638] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" HandleID="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.662 [INFO][5638] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" HandleID="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378370), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"calico-apiserver-7876b86597-fghhg", "timestamp":"2026-04-24 23:57:55.647295107 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.663 [INFO][5638] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.663 [INFO][5638] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.663 [INFO][5638] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.667 [INFO][5638] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.674 [INFO][5638] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.682 [INFO][5638] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.686 [INFO][5638] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.690 [INFO][5638] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.690 [INFO][5638] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.693 [INFO][5638] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82 Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.700 [INFO][5638] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" 
host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.715 [INFO][5638] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.54.134/26] block=192.168.54.128/26 handle="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.715 [INFO][5638] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.134/26] handle="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.715 [INFO][5638] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.746684 containerd[1723]: 2026-04-24 23:57:55.715 [INFO][5638] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.134/26] IPv6=[] ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" HandleID="k8s-pod-network.aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.748733 containerd[1723]: 2026-04-24 23:57:55.719 [INFO][5603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4a31630e-98ec-43f7-b187-040b947d7c6b", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"calico-apiserver-7876b86597-fghhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0f9c5c60fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.748733 containerd[1723]: 2026-04-24 23:57:55.719 [INFO][5603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.134/32] ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.748733 containerd[1723]: 2026-04-24 23:57:55.719 [INFO][5603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f9c5c60fbe ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.748733 containerd[1723]: 2026-04-24 23:57:55.730 [INFO][5603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.748733 containerd[1723]: 2026-04-24 23:57:55.730 [INFO][5603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4a31630e-98ec-43f7-b187-040b947d7c6b", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82", Pod:"calico-apiserver-7876b86597-fghhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0f9c5c60fbe", MAC:"0a:a3:d5:f1:14:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.748733 containerd[1723]: 2026-04-24 23:57:55.743 [INFO][5603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82" Namespace="calico-system" Pod="calico-apiserver-7876b86597-fghhg" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:57:55.806782 systemd-networkd[1362]: cali1a30b1e0b64: Link UP Apr 24 23:57:55.807018 systemd-networkd[1362]: cali1a30b1e0b64: Gained carrier Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.591 [INFO][5614] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0 csi-node-driver- calico-system ba5ea344-e24c-488b-ad7b-af64eecfd3fe 1045 0 2026-04-24 23:57:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 csi-node-driver-rwxs4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a30b1e0b64 [] [] }} ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.592 [INFO][5614] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.701 [INFO][5649] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" HandleID="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.717 [INFO][5649] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" HandleID="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"csi-node-driver-rwxs4", "timestamp":"2026-04-24 23:57:55.701943167 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000384420)} Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.717 [INFO][5649] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.717 [INFO][5649] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.717 [INFO][5649] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.767 [INFO][5649] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.773 [INFO][5649] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.780 [INFO][5649] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.782 [INFO][5649] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.785 [INFO][5649] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.785 [INFO][5649] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.786 [INFO][5649] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.793 [INFO][5649] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.801 [INFO][5649] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.54.135/26] block=192.168.54.128/26 handle="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.801 [INFO][5649] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.135/26] handle="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.801 [INFO][5649] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.832919 containerd[1723]: 2026-04-24 23:57:55.801 [INFO][5649] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.135/26] IPv6=[] ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" HandleID="k8s-pod-network.7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.834950 containerd[1723]: 2026-04-24 23:57:55.804 [INFO][5614] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5ea344-e24c-488b-ad7b-af64eecfd3fe", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"csi-node-driver-rwxs4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a30b1e0b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.834950 containerd[1723]: 2026-04-24 23:57:55.804 [INFO][5614] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.135/32] ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.834950 containerd[1723]: 2026-04-24 23:57:55.804 [INFO][5614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a30b1e0b64 ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.834950 containerd[1723]: 2026-04-24 23:57:55.806 [INFO][5614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.834950 
containerd[1723]: 2026-04-24 23:57:55.807 [INFO][5614] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5ea344-e24c-488b-ad7b-af64eecfd3fe", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a", Pod:"csi-node-driver-rwxs4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a30b1e0b64", MAC:"2e:9f:56:78:6d:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.834950 containerd[1723]: 
2026-04-24 23:57:55.828 [INFO][5614] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a" Namespace="calico-system" Pod="csi-node-driver-rwxs4" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:57:55.917086 systemd-networkd[1362]: cali646b6379251: Link UP Apr 24 23:57:55.919135 systemd-networkd[1362]: cali646b6379251: Gained carrier Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.612 [INFO][5627] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0 goldmane-5b85766d88- calico-system 105586a9-fa42-4f9c-8b39-19852c899d53 1047 0 2026-04-24 23:57:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-b07cc1dc35 goldmane-5b85766d88-frn4w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali646b6379251 [] [] }} ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.612 [INFO][5627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.700 [INFO][5654] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" 
HandleID="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.720 [INFO][5654] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" HandleID="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-b07cc1dc35", "pod":"goldmane-5b85766d88-frn4w", "timestamp":"2026-04-24 23:57:55.700031341 +0000 UTC"}, Hostname:"ci-4081.3.6-n-b07cc1dc35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038e580)} Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.720 [INFO][5654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.801 [INFO][5654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.801 [INFO][5654] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-b07cc1dc35' Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.868 [INFO][5654] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.874 [INFO][5654] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.884 [INFO][5654] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.887 [INFO][5654] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.890 [INFO][5654] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.890 [INFO][5654] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.891 [INFO][5654] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26 Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.899 [INFO][5654] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.910 [INFO][5654] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.54.136/26] block=192.168.54.128/26 handle="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.910 [INFO][5654] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.136/26] handle="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" host="ci-4081.3.6-n-b07cc1dc35" Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.910 [INFO][5654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.954535 containerd[1723]: 2026-04-24 23:57:55.910 [INFO][5654] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.136/26] IPv6=[] ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" HandleID="k8s-pod-network.108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.956691 containerd[1723]: 2026-04-24 23:57:55.913 [INFO][5627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"105586a9-fa42-4f9c-8b39-19852c899d53", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"", Pod:"goldmane-5b85766d88-frn4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali646b6379251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.956691 containerd[1723]: 2026-04-24 23:57:55.913 [INFO][5627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.136/32] ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.956691 containerd[1723]: 2026-04-24 23:57:55.913 [INFO][5627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali646b6379251 ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.956691 containerd[1723]: 2026-04-24 23:57:55.918 [INFO][5627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:55.956691 containerd[1723]: 2026-04-24 23:57:55.919 [INFO][5627] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"105586a9-fa42-4f9c-8b39-19852c899d53", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26", Pod:"goldmane-5b85766d88-frn4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali646b6379251", MAC:"c6:48:d9:a7:83:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.956691 containerd[1723]: 2026-04-24 23:57:55.950 [INFO][5627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26" Namespace="calico-system" Pod="goldmane-5b85766d88-frn4w" WorkloadEndpoint="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:57:56.106103 containerd[1723]: time="2026-04-24T23:57:56.105712683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:56.106103 containerd[1723]: time="2026-04-24T23:57:56.105785884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:56.106103 containerd[1723]: time="2026-04-24T23:57:56.105831184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:56.106103 containerd[1723]: time="2026-04-24T23:57:56.105984786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:56.108209 containerd[1723]: time="2026-04-24T23:57:56.107488507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:56.108209 containerd[1723]: time="2026-04-24T23:57:56.107537808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:56.108209 containerd[1723]: time="2026-04-24T23:57:56.107571709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:56.108209 containerd[1723]: time="2026-04-24T23:57:56.107696010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:56.121887 containerd[1723]: time="2026-04-24T23:57:56.121724005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:56.122125 containerd[1723]: time="2026-04-24T23:57:56.121844307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:56.122125 containerd[1723]: time="2026-04-24T23:57:56.121872607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:56.122125 containerd[1723]: time="2026-04-24T23:57:56.122023310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:56.147734 systemd[1]: Started cri-containerd-aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82.scope - libcontainer container aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82. Apr 24 23:57:56.156221 systemd[1]: Started cri-containerd-7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a.scope - libcontainer container 7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a. Apr 24 23:57:56.171036 systemd[1]: Started cri-containerd-108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26.scope - libcontainer container 108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26. 
Apr 24 23:57:56.240407 containerd[1723]: time="2026-04-24T23:57:56.239208239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwxs4,Uid:ba5ea344-e24c-488b-ad7b-af64eecfd3fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a\"" Apr 24 23:57:56.244544 systemd-networkd[1362]: cali78175e9dfa5: Gained IPv6LL Apr 24 23:57:56.302147 containerd[1723]: time="2026-04-24T23:57:56.301949412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-frn4w,Uid:105586a9-fa42-4f9c-8b39-19852c899d53,Namespace:calico-system,Attempt:1,} returns sandbox id \"108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26\"" Apr 24 23:57:56.306903 containerd[1723]: time="2026-04-24T23:57:56.306737578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7876b86597-fghhg,Uid:4a31630e-98ec-43f7-b187-040b947d7c6b,Namespace:calico-system,Attempt:1,} returns sandbox id \"aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82\"" Apr 24 23:57:57.139525 systemd-networkd[1362]: cali1a30b1e0b64: Gained IPv6LL Apr 24 23:57:57.152555 containerd[1723]: time="2026-04-24T23:57:57.152500841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:57.155371 containerd[1723]: time="2026-04-24T23:57:57.155238379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 24 23:57:57.159223 containerd[1723]: time="2026-04-24T23:57:57.159135333Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:57.164445 containerd[1723]: time="2026-04-24T23:57:57.164409006Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:57.165946 containerd[1723]: time="2026-04-24T23:57:57.165287218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.334571874s" Apr 24 23:57:57.165946 containerd[1723]: time="2026-04-24T23:57:57.165329219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:57:57.167028 containerd[1723]: time="2026-04-24T23:57:57.166432234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:57:57.172645 containerd[1723]: time="2026-04-24T23:57:57.172612520Z" level=info msg="CreateContainer within sandbox \"21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:57:57.203714 systemd-networkd[1362]: cali0f9c5c60fbe: Gained IPv6LL Apr 24 23:57:57.209334 containerd[1723]: time="2026-04-24T23:57:57.209083028Z" level=info msg="CreateContainer within sandbox \"21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"88ae577658930a659bf7a2d7f82829905204206f4363071c2737f934f69bfe6e\"" Apr 24 23:57:57.210573 containerd[1723]: time="2026-04-24T23:57:57.210142842Z" level=info msg="StartContainer for \"88ae577658930a659bf7a2d7f82829905204206f4363071c2737f934f69bfe6e\"" Apr 24 23:57:57.260522 systemd[1]: Started 
cri-containerd-88ae577658930a659bf7a2d7f82829905204206f4363071c2737f934f69bfe6e.scope - libcontainer container 88ae577658930a659bf7a2d7f82829905204206f4363071c2737f934f69bfe6e. Apr 24 23:57:57.305799 containerd[1723]: time="2026-04-24T23:57:57.305649271Z" level=info msg="StartContainer for \"88ae577658930a659bf7a2d7f82829905204206f4363071c2737f934f69bfe6e\" returns successfully" Apr 24 23:57:57.971469 systemd-networkd[1362]: cali646b6379251: Gained IPv6LL Apr 24 23:57:58.515027 kubelet[3267]: I0424 23:57:58.514975 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:58:00.288570 containerd[1723]: time="2026-04-24T23:58:00.288511854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:00.291532 containerd[1723]: time="2026-04-24T23:58:00.291464895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 24 23:58:00.294786 containerd[1723]: time="2026-04-24T23:58:00.294731840Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:00.299763 containerd[1723]: time="2026-04-24T23:58:00.299623408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:00.300637 containerd[1723]: time="2026-04-24T23:58:00.300481520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.134012585s" Apr 24 23:58:00.300637 containerd[1723]: time="2026-04-24T23:58:00.300522021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 24 23:58:00.302729 containerd[1723]: time="2026-04-24T23:58:00.302512849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:58:00.327529 containerd[1723]: time="2026-04-24T23:58:00.327308694Z" level=info msg="CreateContainer within sandbox \"d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:58:00.370883 containerd[1723]: time="2026-04-24T23:58:00.370834999Z" level=info msg="CreateContainer within sandbox \"d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea\"" Apr 24 23:58:00.371607 containerd[1723]: time="2026-04-24T23:58:00.371544909Z" level=info msg="StartContainer for \"89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea\"" Apr 24 23:58:00.412810 systemd[1]: Started cri-containerd-89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea.scope - libcontainer container 89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea. 
Apr 24 23:58:00.457804 containerd[1723]: time="2026-04-24T23:58:00.457656206Z" level=info msg="StartContainer for \"89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea\" returns successfully" Apr 24 23:58:00.558847 kubelet[3267]: I0424 23:58:00.557616 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7876b86597-j8278" podStartSLOduration=53.221074394 podStartE2EDuration="56.557579696s" podCreationTimestamp="2026-04-24 23:57:04 +0000 UTC" firstStartedPulling="2026-04-24 23:57:53.82975393 +0000 UTC m=+69.791181453" lastFinishedPulling="2026-04-24 23:57:57.166259132 +0000 UTC m=+73.127686755" observedRunningTime="2026-04-24 23:57:57.537519295 +0000 UTC m=+73.498946918" watchObservedRunningTime="2026-04-24 23:58:00.557579696 +0000 UTC m=+76.519007319" Apr 24 23:58:00.609531 kubelet[3267]: I0424 23:58:00.609326 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fd8994f4c-c6q9c" podStartSLOduration=49.91410271 podStartE2EDuration="55.609277315s" podCreationTimestamp="2026-04-24 23:57:05 +0000 UTC" firstStartedPulling="2026-04-24 23:57:54.606476332 +0000 UTC m=+70.567903855" lastFinishedPulling="2026-04-24 23:58:00.301650937 +0000 UTC m=+76.263078460" observedRunningTime="2026-04-24 23:58:00.558738412 +0000 UTC m=+76.520166035" watchObservedRunningTime="2026-04-24 23:58:00.609277315 +0000 UTC m=+76.570704938" Apr 24 23:58:01.827817 containerd[1723]: time="2026-04-24T23:58:01.827751239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:01.830490 containerd[1723]: time="2026-04-24T23:58:01.830318074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 24 23:58:01.833603 containerd[1723]: time="2026-04-24T23:58:01.833530418Z" level=info msg="ImageCreate event 
name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:01.837878 containerd[1723]: time="2026-04-24T23:58:01.837817777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:01.838890 containerd[1723]: time="2026-04-24T23:58:01.838603388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.536050539s" Apr 24 23:58:01.838890 containerd[1723]: time="2026-04-24T23:58:01.838644489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 24 23:58:01.839994 containerd[1723]: time="2026-04-24T23:58:01.839944107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:58:01.846219 containerd[1723]: time="2026-04-24T23:58:01.846181492Z" level=info msg="CreateContainer within sandbox \"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:58:01.888719 containerd[1723]: time="2026-04-24T23:58:01.888660377Z" level=info msg="CreateContainer within sandbox \"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c432ea53675fe3b94c3563ca6e55b8b38507b238743a6234178445e38595602f\"" Apr 24 23:58:01.889628 containerd[1723]: time="2026-04-24T23:58:01.889494989Z" level=info msg="StartContainer for 
\"c432ea53675fe3b94c3563ca6e55b8b38507b238743a6234178445e38595602f\"" Apr 24 23:58:01.930735 systemd[1]: Started cri-containerd-c432ea53675fe3b94c3563ca6e55b8b38507b238743a6234178445e38595602f.scope - libcontainer container c432ea53675fe3b94c3563ca6e55b8b38507b238743a6234178445e38595602f. Apr 24 23:58:01.970205 containerd[1723]: time="2026-04-24T23:58:01.970152700Z" level=info msg="StartContainer for \"c432ea53675fe3b94c3563ca6e55b8b38507b238743a6234178445e38595602f\" returns successfully" Apr 24 23:58:04.468422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999240328.mount: Deactivated successfully. Apr 24 23:58:04.994580 containerd[1723]: time="2026-04-24T23:58:04.994520552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:04.997555 containerd[1723]: time="2026-04-24T23:58:04.997310191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:58:05.001945 containerd[1723]: time="2026-04-24T23:58:05.001080143Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:05.008999 containerd[1723]: time="2026-04-24T23:58:05.008959551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:05.011080 containerd[1723]: time="2026-04-24T23:58:05.011042280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", 
size \"55623232\" in 3.171065973s" Apr 24 23:58:05.011443 containerd[1723]: time="2026-04-24T23:58:05.011405385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:58:05.014169 containerd[1723]: time="2026-04-24T23:58:05.013973420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:58:05.034747 containerd[1723]: time="2026-04-24T23:58:05.034712506Z" level=info msg="CreateContainer within sandbox \"108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:58:05.071148 containerd[1723]: time="2026-04-24T23:58:05.071101207Z" level=info msg="CreateContainer within sandbox \"108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f\"" Apr 24 23:58:05.072534 containerd[1723]: time="2026-04-24T23:58:05.071875218Z" level=info msg="StartContainer for \"7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f\"" Apr 24 23:58:05.111506 systemd[1]: Started cri-containerd-7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f.scope - libcontainer container 7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f. 
Apr 24 23:58:05.158629 containerd[1723]: time="2026-04-24T23:58:05.158313908Z" level=info msg="StartContainer for \"7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f\" returns successfully" Apr 24 23:58:05.322325 containerd[1723]: time="2026-04-24T23:58:05.322174165Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:05.325637 containerd[1723]: time="2026-04-24T23:58:05.325190806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:58:05.327440 containerd[1723]: time="2026-04-24T23:58:05.327392837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 313.383516ms" Apr 24 23:58:05.327440 containerd[1723]: time="2026-04-24T23:58:05.327433837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:58:05.329040 containerd[1723]: time="2026-04-24T23:58:05.328552553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:58:05.335940 containerd[1723]: time="2026-04-24T23:58:05.335899454Z" level=info msg="CreateContainer within sandbox \"aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:58:05.377467 containerd[1723]: time="2026-04-24T23:58:05.377413726Z" level=info msg="CreateContainer within sandbox \"aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d15f81cc22911711b0edb2b5bb5f5a4d503ed40ad637051e27b265f70edb97d\"" Apr 24 23:58:05.378449 containerd[1723]: time="2026-04-24T23:58:05.378276438Z" level=info msg="StartContainer for \"2d15f81cc22911711b0edb2b5bb5f5a4d503ed40ad637051e27b265f70edb97d\"" Apr 24 23:58:05.409529 systemd[1]: Started cri-containerd-2d15f81cc22911711b0edb2b5bb5f5a4d503ed40ad637051e27b265f70edb97d.scope - libcontainer container 2d15f81cc22911711b0edb2b5bb5f5a4d503ed40ad637051e27b265f70edb97d. Apr 24 23:58:05.472543 containerd[1723]: time="2026-04-24T23:58:05.472304933Z" level=info msg="StartContainer for \"2d15f81cc22911711b0edb2b5bb5f5a4d503ed40ad637051e27b265f70edb97d\" returns successfully" Apr 24 23:58:05.598375 kubelet[3267]: I0424 23:58:05.595138 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-frn4w" podStartSLOduration=52.887375975 podStartE2EDuration="1m1.595109924s" podCreationTimestamp="2026-04-24 23:57:04 +0000 UTC" firstStartedPulling="2026-04-24 23:57:56.305997968 +0000 UTC m=+72.267425491" lastFinishedPulling="2026-04-24 23:58:05.013731917 +0000 UTC m=+80.975159440" observedRunningTime="2026-04-24 23:58:05.572742316 +0000 UTC m=+81.534169939" watchObservedRunningTime="2026-04-24 23:58:05.595109924 +0000 UTC m=+81.556537547" Apr 24 23:58:06.649948 systemd[1]: run-containerd-runc-k8s.io-7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f-runc.Oo7ILp.mount: Deactivated successfully. 
Apr 24 23:58:07.212610 kubelet[3267]: I0424 23:58:07.212533 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7876b86597-fghhg" podStartSLOduration=54.193059958 podStartE2EDuration="1m3.212506999s" podCreationTimestamp="2026-04-24 23:57:04 +0000 UTC" firstStartedPulling="2026-04-24 23:57:56.308924509 +0000 UTC m=+72.270352132" lastFinishedPulling="2026-04-24 23:58:05.32837165 +0000 UTC m=+81.289799173" observedRunningTime="2026-04-24 23:58:05.599575385 +0000 UTC m=+81.561003008" watchObservedRunningTime="2026-04-24 23:58:07.212506999 +0000 UTC m=+83.173934522" Apr 24 23:58:07.332922 containerd[1723]: time="2026-04-24T23:58:07.332770555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:07.335960 containerd[1723]: time="2026-04-24T23:58:07.335910899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 24 23:58:07.340369 containerd[1723]: time="2026-04-24T23:58:07.339521948Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:07.348393 containerd[1723]: time="2026-04-24T23:58:07.348363470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:07.350901 containerd[1723]: time="2026-04-24T23:58:07.350867905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.022278452s" Apr 24 23:58:07.351029 containerd[1723]: time="2026-04-24T23:58:07.351012207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 24 23:58:07.360597 containerd[1723]: time="2026-04-24T23:58:07.360558838Z" level=info msg="CreateContainer within sandbox \"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:58:07.395685 containerd[1723]: time="2026-04-24T23:58:07.395637721Z" level=info msg="CreateContainer within sandbox \"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9638a87b71070771504fd6c06de1f4c754a9e071dab615fdc352bc6de54c0674\"" Apr 24 23:58:07.396794 containerd[1723]: time="2026-04-24T23:58:07.396750637Z" level=info msg="StartContainer for \"9638a87b71070771504fd6c06de1f4c754a9e071dab615fdc352bc6de54c0674\"" Apr 24 23:58:07.440520 systemd[1]: Started cri-containerd-9638a87b71070771504fd6c06de1f4c754a9e071dab615fdc352bc6de54c0674.scope - libcontainer container 9638a87b71070771504fd6c06de1f4c754a9e071dab615fdc352bc6de54c0674. 
Apr 24 23:58:07.472591 containerd[1723]: time="2026-04-24T23:58:07.471713869Z" level=info msg="StartContainer for \"9638a87b71070771504fd6c06de1f4c754a9e071dab615fdc352bc6de54c0674\" returns successfully" Apr 24 23:58:08.244838 kubelet[3267]: I0424 23:58:08.244794 3267 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 23:58:08.245374 kubelet[3267]: I0424 23:58:08.244855 3267 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:58:12.521239 kubelet[3267]: I0424 23:58:12.521163 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rwxs4" podStartSLOduration=56.412106239 podStartE2EDuration="1m7.521139968s" podCreationTimestamp="2026-04-24 23:57:05 +0000 UTC" firstStartedPulling="2026-04-24 23:57:56.24286639 +0000 UTC m=+72.204294013" lastFinishedPulling="2026-04-24 23:58:07.351900219 +0000 UTC m=+83.313327742" observedRunningTime="2026-04-24 23:58:07.596191583 +0000 UTC m=+83.557619206" watchObservedRunningTime="2026-04-24 23:58:12.521139968 +0000 UTC m=+88.482567591" Apr 24 23:58:27.880071 kubelet[3267]: I0424 23:58:27.879638 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:58:30.553143 systemd[1]: run-containerd-runc-k8s.io-89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea-runc.gBHoT6.mount: Deactivated successfully. Apr 24 23:58:37.595838 systemd[1]: run-containerd-runc-k8s.io-7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f-runc.xW0VeG.mount: Deactivated successfully. 
Apr 24 23:58:44.306146 containerd[1723]: time="2026-04-24T23:58:44.306098391Z" level=info msg="StopPodSandbox for \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\"" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.340 [WARNING][6364] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4f4f44d0-9a79-44e1-a0dd-24732d17ab45", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63", Pod:"calico-apiserver-7876b86597-j8278", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5845e2d69ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.340 [INFO][6364] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.340 [INFO][6364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" iface="eth0" netns="" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.340 [INFO][6364] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.340 [INFO][6364] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.362 [INFO][6371] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.362 [INFO][6371] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.362 [INFO][6371] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.368 [WARNING][6371] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.368 [INFO][6371] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.369 [INFO][6371] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.372712 containerd[1723]: 2026-04-24 23:58:44.371 [INFO][6364] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.373491 containerd[1723]: time="2026-04-24T23:58:44.372743118Z" level=info msg="TearDown network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\" successfully" Apr 24 23:58:44.373491 containerd[1723]: time="2026-04-24T23:58:44.372775918Z" level=info msg="StopPodSandbox for \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\" returns successfully" Apr 24 23:58:44.373491 containerd[1723]: time="2026-04-24T23:58:44.373329826Z" level=info msg="RemovePodSandbox for \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\"" Apr 24 23:58:44.373491 containerd[1723]: time="2026-04-24T23:58:44.373388027Z" level=info msg="Forcibly stopping sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\"" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.406 [WARNING][6385] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4f4f44d0-9a79-44e1-a0dd-24732d17ab45", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"21269294092760e3b7798f276b30cfab07be1596bc323d2c0251aedb68af9f63", Pod:"calico-apiserver-7876b86597-j8278", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5845e2d69ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.407 [INFO][6385] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.407 [INFO][6385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" iface="eth0" netns="" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.407 [INFO][6385] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.407 [INFO][6385] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.427 [INFO][6392] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.427 [INFO][6392] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.427 [INFO][6392] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.433 [WARNING][6392] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.433 [INFO][6392] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" HandleID="k8s-pod-network.169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--j8278-eth0" Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.434 [INFO][6392] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.437081 containerd[1723]: 2026-04-24 23:58:44.435 [INFO][6385] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de" Apr 24 23:58:44.437802 containerd[1723]: time="2026-04-24T23:58:44.437127713Z" level=info msg="TearDown network for sandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\" successfully" Apr 24 23:58:44.449762 containerd[1723]: time="2026-04-24T23:58:44.449714488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:44.450012 containerd[1723]: time="2026-04-24T23:58:44.449818390Z" level=info msg="RemovePodSandbox \"169e08285e559bb274d2f2cf6d11b4f5fdddd1d7e3487b1384de81490c1585de\" returns successfully" Apr 24 23:58:44.450539 containerd[1723]: time="2026-04-24T23:58:44.450488099Z" level=info msg="StopPodSandbox for \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\"" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.486 [WARNING][6406] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5ea344-e24c-488b-ad7b-af64eecfd3fe", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a", Pod:"csi-node-driver-rwxs4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a30b1e0b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.486 [INFO][6406] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.486 [INFO][6406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" iface="eth0" netns="" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.486 [INFO][6406] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.486 [INFO][6406] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.513 [INFO][6414] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.513 [INFO][6414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.513 [INFO][6414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.524 [WARNING][6414] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.524 [INFO][6414] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.526 [INFO][6414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.530991 containerd[1723]: 2026-04-24 23:58:44.528 [INFO][6406] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.533309 containerd[1723]: time="2026-04-24T23:58:44.531047020Z" level=info msg="TearDown network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\" successfully" Apr 24 23:58:44.533309 containerd[1723]: time="2026-04-24T23:58:44.531075020Z" level=info msg="StopPodSandbox for \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\" returns successfully" Apr 24 23:58:44.533309 containerd[1723]: time="2026-04-24T23:58:44.531534826Z" level=info msg="RemovePodSandbox for \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\"" Apr 24 23:58:44.533309 containerd[1723]: time="2026-04-24T23:58:44.531597827Z" level=info msg="Forcibly stopping sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\"" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.570 [WARNING][6428] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba5ea344-e24c-488b-ad7b-af64eecfd3fe", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"7913e2b355f0bbd2c83e1628f80213fbdf317bef45bce74aeb9546513797c97a", Pod:"csi-node-driver-rwxs4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a30b1e0b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.570 [INFO][6428] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.570 [INFO][6428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" iface="eth0" netns="" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.570 [INFO][6428] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.570 [INFO][6428] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.592 [INFO][6436] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.592 [INFO][6436] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.592 [INFO][6436] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.597 [WARNING][6436] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.598 [INFO][6436] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" HandleID="k8s-pod-network.dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-csi--node--driver--rwxs4-eth0" Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.599 [INFO][6436] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.602125 containerd[1723]: 2026-04-24 23:58:44.600 [INFO][6428] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42" Apr 24 23:58:44.602809 containerd[1723]: time="2026-04-24T23:58:44.602159409Z" level=info msg="TearDown network for sandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\" successfully" Apr 24 23:58:44.610390 containerd[1723]: time="2026-04-24T23:58:44.610317622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:44.610512 containerd[1723]: time="2026-04-24T23:58:44.610443624Z" level=info msg="RemovePodSandbox \"dc50d1297ffc2f12a92af03618244ca518eb2718540c72d00bf2991c7586af42\" returns successfully" Apr 24 23:58:44.610983 containerd[1723]: time="2026-04-24T23:58:44.610951231Z" level=info msg="StopPodSandbox for \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\"" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.646 [WARNING][6450] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"105586a9-fa42-4f9c-8b39-19852c899d53", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26", Pod:"goldmane-5b85766d88-frn4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali646b6379251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.646 [INFO][6450] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.646 [INFO][6450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" iface="eth0" netns="" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.646 [INFO][6450] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.646 [INFO][6450] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.673 [INFO][6458] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.673 [INFO][6458] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.673 [INFO][6458] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.680 [WARNING][6458] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.680 [INFO][6458] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.681 [INFO][6458] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.684381 containerd[1723]: 2026-04-24 23:58:44.683 [INFO][6450] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.685376 containerd[1723]: time="2026-04-24T23:58:44.684514354Z" level=info msg="TearDown network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\" successfully" Apr 24 23:58:44.685376 containerd[1723]: time="2026-04-24T23:58:44.684548755Z" level=info msg="StopPodSandbox for \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\" returns successfully" Apr 24 23:58:44.685376 containerd[1723]: time="2026-04-24T23:58:44.685032662Z" level=info msg="RemovePodSandbox for \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\"" Apr 24 23:58:44.685376 containerd[1723]: time="2026-04-24T23:58:44.685071762Z" level=info msg="Forcibly stopping sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\"" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.730 [WARNING][6473] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"105586a9-fa42-4f9c-8b39-19852c899d53", ResourceVersion:"1216", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"108b6d47dd846d1d1e6ae766b3661cf9ffe39a4648e637dce62b6061a608ab26", Pod:"goldmane-5b85766d88-frn4w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali646b6379251", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.730 [INFO][6473] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.730 [INFO][6473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" iface="eth0" netns="" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.730 [INFO][6473] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.730 [INFO][6473] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.753 [INFO][6480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.753 [INFO][6480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.753 [INFO][6480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.759 [WARNING][6480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.759 [INFO][6480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" HandleID="k8s-pod-network.31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-goldmane--5b85766d88--frn4w-eth0" Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.763 [INFO][6480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.766560 containerd[1723]: 2026-04-24 23:58:44.765 [INFO][6473] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85" Apr 24 23:58:44.767435 containerd[1723]: time="2026-04-24T23:58:44.766628896Z" level=info msg="TearDown network for sandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\" successfully" Apr 24 23:58:44.775232 containerd[1723]: time="2026-04-24T23:58:44.775185316Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:44.775394 containerd[1723]: time="2026-04-24T23:58:44.775278617Z" level=info msg="RemovePodSandbox \"31d4fbf2494bef73cd9ad0a44ccda2ac0b73886689bf0b9a5e11f0d7e36b0e85\" returns successfully" Apr 24 23:58:44.775881 containerd[1723]: time="2026-04-24T23:58:44.775850025Z" level=info msg="StopPodSandbox for \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\"" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.810 [WARNING][6494] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4a31630e-98ec-43f7-b187-040b947d7c6b", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82", Pod:"calico-apiserver-7876b86597-fghhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0f9c5c60fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.810 [INFO][6494] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.810 [INFO][6494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" iface="eth0" netns="" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.810 [INFO][6494] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.810 [INFO][6494] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.832 [INFO][6501] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.832 [INFO][6501] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.832 [INFO][6501] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.838 [WARNING][6501] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.839 [INFO][6501] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.840 [INFO][6501] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.842913 containerd[1723]: 2026-04-24 23:58:44.841 [INFO][6494] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.842913 containerd[1723]: time="2026-04-24T23:58:44.842819256Z" level=info msg="TearDown network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\" successfully" Apr 24 23:58:44.842913 containerd[1723]: time="2026-04-24T23:58:44.842851857Z" level=info msg="StopPodSandbox for \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\" returns successfully" Apr 24 23:58:44.844031 containerd[1723]: time="2026-04-24T23:58:44.843917172Z" level=info msg="RemovePodSandbox for \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\"" Apr 24 23:58:44.844031 containerd[1723]: time="2026-04-24T23:58:44.843972272Z" level=info msg="Forcibly stopping sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\"" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.876 [WARNING][6516] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0", GenerateName:"calico-apiserver-7876b86597-", Namespace:"calico-system", SelfLink:"", UID:"4a31630e-98ec-43f7-b187-040b947d7c6b", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7876b86597", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"aa70290fd56064444df4c476e94f7c000a83c603b9ba8fe15920fa06ff9f6e82", Pod:"calico-apiserver-7876b86597-fghhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0f9c5c60fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.877 [INFO][6516] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.877 [INFO][6516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" iface="eth0" netns="" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.877 [INFO][6516] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.877 [INFO][6516] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.899 [INFO][6523] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.899 [INFO][6523] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.900 [INFO][6523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.905 [WARNING][6523] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.905 [INFO][6523] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" HandleID="k8s-pod-network.a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--apiserver--7876b86597--fghhg-eth0" Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.906 [INFO][6523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.909677 containerd[1723]: 2026-04-24 23:58:44.907 [INFO][6516] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324" Apr 24 23:58:44.912504 containerd[1723]: time="2026-04-24T23:58:44.910622899Z" level=info msg="TearDown network for sandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\" successfully" Apr 24 23:58:44.919425 containerd[1723]: time="2026-04-24T23:58:44.919390321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:44.919596 containerd[1723]: time="2026-04-24T23:58:44.919565024Z" level=info msg="RemovePodSandbox \"a031e21d17a77f2006a6eb0e54bf118fb052bd4ec49821c16ce0d9754cff2324\" returns successfully" Apr 24 23:58:44.920147 containerd[1723]: time="2026-04-24T23:58:44.920116431Z" level=info msg="StopPodSandbox for \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\"" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.956 [WARNING][6537] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0", GenerateName:"calico-kube-controllers-7fd8994f4c-", Namespace:"calico-system", SelfLink:"", UID:"556622c8-9156-4147-b4ef-3b90cb6f4249", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd8994f4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30", Pod:"calico-kube-controllers-7fd8994f4c-c6q9c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78175e9dfa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.956 [INFO][6537] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.956 [INFO][6537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" iface="eth0" netns="" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.956 [INFO][6537] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.956 [INFO][6537] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.978 [INFO][6545] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.979 [INFO][6545] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.979 [INFO][6545] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.986 [WARNING][6545] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.986 [INFO][6545] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.987 [INFO][6545] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:44.989845 containerd[1723]: 2026-04-24 23:58:44.988 [INFO][6537] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:58:44.989845 containerd[1723]: time="2026-04-24T23:58:44.989681799Z" level=info msg="TearDown network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\" successfully" Apr 24 23:58:44.989845 containerd[1723]: time="2026-04-24T23:58:44.989706299Z" level=info msg="StopPodSandbox for \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\" returns successfully" Apr 24 23:58:44.990592 containerd[1723]: time="2026-04-24T23:58:44.990252207Z" level=info msg="RemovePodSandbox for \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\"" Apr 24 23:58:44.990592 containerd[1723]: time="2026-04-24T23:58:44.990290508Z" level=info msg="Forcibly stopping sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\"" Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.023 [WARNING][6559] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0", GenerateName:"calico-kube-controllers-7fd8994f4c-", Namespace:"calico-system", SelfLink:"", UID:"556622c8-9156-4147-b4ef-3b90cb6f4249", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd8994f4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"d519a2d115f1a202a39547080cd3bdad897cde585ddcea7da30d90c42ccdac30", Pod:"calico-kube-controllers-7fd8994f4c-c6q9c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78175e9dfa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.024 [INFO][6559] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.024 [INFO][6559] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" iface="eth0" netns="" Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.024 [INFO][6559] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.024 [INFO][6559] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.046 [INFO][6566] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0" Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.046 [INFO][6566] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.046 [INFO][6566] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.052 [WARNING][6566] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0"
Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.053 [INFO][6566] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" HandleID="k8s-pod-network.75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-calico--kube--controllers--7fd8994f4c--c6q9c-eth0"
Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.054 [INFO][6566] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:58:45.056753 containerd[1723]: 2026-04-24 23:58:45.055 [INFO][6559] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75"
Apr 24 23:58:45.057462 containerd[1723]: time="2026-04-24T23:58:45.056792733Z" level=info msg="TearDown network for sandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\" successfully"
Apr 24 23:58:45.064672 containerd[1723]: time="2026-04-24T23:58:45.064553041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:58:45.065146 containerd[1723]: time="2026-04-24T23:58:45.064733043Z" level=info msg="RemovePodSandbox \"75777322cb01bae42b325bcb31b16af2633a3f80c836f539a543ed47e90a0c75\" returns successfully"
Apr 24 23:58:45.065831 containerd[1723]: time="2026-04-24T23:58:45.065545654Z" level=info msg="StopPodSandbox for \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\""
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.098 [WARNING][6581] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67de6f2b-7589-40b6-8033-934e9c5ab432", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0", Pod:"coredns-674b8bbfcf-4hsvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd99befcb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.098 [INFO][6581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.098 [INFO][6581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" iface="eth0" netns=""
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.098 [INFO][6581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.098 [INFO][6581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.120 [INFO][6589] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0"
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.120 [INFO][6589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.120 [INFO][6589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.126 [WARNING][6589] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0"
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.126 [INFO][6589] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0"
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.127 [INFO][6589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:58:45.130257 containerd[1723]: 2026-04-24 23:58:45.128 [INFO][6581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.130833 containerd[1723]: time="2026-04-24T23:58:45.130325055Z" level=info msg="TearDown network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\" successfully"
Apr 24 23:58:45.130833 containerd[1723]: time="2026-04-24T23:58:45.130382956Z" level=info msg="StopPodSandbox for \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\" returns successfully"
Apr 24 23:58:45.131538 containerd[1723]: time="2026-04-24T23:58:45.131324969Z" level=info msg="RemovePodSandbox for \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\""
Apr 24 23:58:45.131538 containerd[1723]: time="2026-04-24T23:58:45.131395170Z" level=info msg="Forcibly stopping sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\""
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.163 [WARNING][6603] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67de6f2b-7589-40b6-8033-934e9c5ab432", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"738a818fcaaff016bd31c767553f19688f3d922dc60b74eb233153e1a5b1dbd0", Pod:"coredns-674b8bbfcf-4hsvs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd99befcb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.164 [INFO][6603] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.164 [INFO][6603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" iface="eth0" netns=""
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.164 [INFO][6603] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.164 [INFO][6603] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.188 [INFO][6610] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0"
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.189 [INFO][6610] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.189 [INFO][6610] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.195 [WARNING][6610] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0"
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.195 [INFO][6610] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" HandleID="k8s-pod-network.51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4hsvs-eth0"
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.196 [INFO][6610] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:58:45.199777 containerd[1723]: 2026-04-24 23:58:45.198 [INFO][6603] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4"
Apr 24 23:58:45.199777 containerd[1723]: time="2026-04-24T23:58:45.199698020Z" level=info msg="TearDown network for sandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\" successfully"
Apr 24 23:58:45.208802 containerd[1723]: time="2026-04-24T23:58:45.208751146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:58:45.208955 containerd[1723]: time="2026-04-24T23:58:45.208841348Z" level=info msg="RemovePodSandbox \"51b5f7c4781d54ca68721f2b392d387d20998a8e4aa706041b541978e8087ae4\" returns successfully"
Apr 24 23:58:45.209676 containerd[1723]: time="2026-04-24T23:58:45.209434256Z" level=info msg="StopPodSandbox for \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\""
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.243 [WARNING][6624] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d89c2d22-a648-4465-85aa-6b284aea19c0", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7", Pod:"coredns-674b8bbfcf-4cgp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4efccf0c213", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.244 [INFO][6624] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.244 [INFO][6624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" iface="eth0" netns=""
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.244 [INFO][6624] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.244 [INFO][6624] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.266 [INFO][6631] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0"
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.267 [INFO][6631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.267 [INFO][6631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.273 [WARNING][6631] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0"
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.273 [INFO][6631] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0"
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.274 [INFO][6631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:58:45.277526 containerd[1723]: 2026-04-24 23:58:45.276 [INFO][6624] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.278189 containerd[1723]: time="2026-04-24T23:58:45.277612004Z" level=info msg="TearDown network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\" successfully"
Apr 24 23:58:45.278189 containerd[1723]: time="2026-04-24T23:58:45.277665305Z" level=info msg="StopPodSandbox for \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\" returns successfully"
Apr 24 23:58:45.278396 containerd[1723]: time="2026-04-24T23:58:45.278337014Z" level=info msg="RemovePodSandbox for \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\""
Apr 24 23:58:45.278497 containerd[1723]: time="2026-04-24T23:58:45.278410415Z" level=info msg="Forcibly stopping sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\""
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.317 [WARNING][6645] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d89c2d22-a648-4465-85aa-6b284aea19c0", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-b07cc1dc35", ContainerID:"79bbbddd77b76c6e40512f58e561df084988b2f0029264eb7463b8018bb1d9c7", Pod:"coredns-674b8bbfcf-4cgp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4efccf0c213", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.317 [INFO][6645] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.317 [INFO][6645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" iface="eth0" netns=""
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.317 [INFO][6645] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.317 [INFO][6645] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.338 [INFO][6653] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0"
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.338 [INFO][6653] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.338 [INFO][6653] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.344 [WARNING][6653] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0"
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.344 [INFO][6653] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" HandleID="k8s-pod-network.ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3" Workload="ci--4081.3.6--n--b07cc1dc35-k8s-coredns--674b8bbfcf--4cgp8-eth0"
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.346 [INFO][6653] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:58:45.348955 containerd[1723]: 2026-04-24 23:58:45.347 [INFO][6645] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3"
Apr 24 23:58:45.349946 containerd[1723]: time="2026-04-24T23:58:45.348996697Z" level=info msg="TearDown network for sandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\" successfully"
Apr 24 23:58:45.358162 containerd[1723]: time="2026-04-24T23:58:45.358110324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:58:45.358525 containerd[1723]: time="2026-04-24T23:58:45.358196525Z" level=info msg="RemovePodSandbox \"ff73f5b94c4f0c88998fe4e1c4aa47b50678de2b2f70af26f51dddc520bae7d3\" returns successfully"
Apr 24 23:58:55.455640 systemd[1]: Started sshd@7-10.0.0.29:22-4.175.71.9:33386.service - OpenSSH per-connection server daemon (4.175.71.9:33386).
Apr 24 23:58:55.572957 sshd[6670]: Accepted publickey for core from 4.175.71.9 port 33386 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:58:55.574627 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:58:55.579818 systemd-logind[1712]: New session 10 of user core.
Apr 24 23:58:55.586673 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 24 23:58:55.752644 sshd[6670]: pam_unix(sshd:session): session closed for user core
Apr 24 23:58:55.757122 systemd[1]: sshd@7-10.0.0.29:22-4.175.71.9:33386.service: Deactivated successfully.
Apr 24 23:58:55.760313 systemd[1]: session-10.scope: Deactivated successfully.
Apr 24 23:58:55.762989 systemd-logind[1712]: Session 10 logged out. Waiting for processes to exit.
Apr 24 23:58:55.764151 systemd-logind[1712]: Removed session 10.
Apr 24 23:59:00.779113 systemd[1]: Started sshd@8-10.0.0.29:22-4.175.71.9:33392.service - OpenSSH per-connection server daemon (4.175.71.9:33392).
Apr 24 23:59:00.892529 sshd[6723]: Accepted publickey for core from 4.175.71.9 port 33392 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:00.894075 sshd[6723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:00.900415 systemd-logind[1712]: New session 11 of user core.
Apr 24 23:59:00.903550 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 24 23:59:01.058389 sshd[6723]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:01.062793 systemd-logind[1712]: Session 11 logged out. Waiting for processes to exit.
Apr 24 23:59:01.063749 systemd[1]: sshd@8-10.0.0.29:22-4.175.71.9:33392.service: Deactivated successfully.
Apr 24 23:59:01.065871 systemd[1]: session-11.scope: Deactivated successfully.
Apr 24 23:59:01.067187 systemd-logind[1712]: Removed session 11.
Apr 24 23:59:06.088690 systemd[1]: Started sshd@9-10.0.0.29:22-4.175.71.9:51724.service - OpenSSH per-connection server daemon (4.175.71.9:51724).
Apr 24 23:59:06.202804 sshd[6770]: Accepted publickey for core from 4.175.71.9 port 51724 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:06.204419 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:06.209595 systemd-logind[1712]: New session 12 of user core.
Apr 24 23:59:06.213513 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 24 23:59:06.371851 sshd[6770]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:06.376384 systemd-logind[1712]: Session 12 logged out. Waiting for processes to exit.
Apr 24 23:59:06.377087 systemd[1]: sshd@9-10.0.0.29:22-4.175.71.9:51724.service: Deactivated successfully.
Apr 24 23:59:06.379482 systemd[1]: session-12.scope: Deactivated successfully.
Apr 24 23:59:06.380882 systemd-logind[1712]: Removed session 12.
Apr 24 23:59:11.405671 systemd[1]: Started sshd@10-10.0.0.29:22-4.175.71.9:51734.service - OpenSSH per-connection server daemon (4.175.71.9:51734).
Apr 24 23:59:11.515370 sshd[6803]: Accepted publickey for core from 4.175.71.9 port 51734 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:11.516870 sshd[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:11.521142 systemd-logind[1712]: New session 13 of user core.
Apr 24 23:59:11.525734 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 23:59:11.679185 sshd[6803]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:11.683794 systemd[1]: sshd@10-10.0.0.29:22-4.175.71.9:51734.service: Deactivated successfully.
Apr 24 23:59:11.686243 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 23:59:11.687601 systemd-logind[1712]: Session 13 logged out. Waiting for processes to exit.
Apr 24 23:59:11.688618 systemd-logind[1712]: Removed session 13.
Apr 24 23:59:12.457564 systemd[1]: run-containerd-runc-k8s.io-8e1af0ebc58dd4d640ff6e004a50f78a354ed5b604cfc9971aa1ace2e3471d1e-runc.ZAdINJ.mount: Deactivated successfully.
Apr 24 23:59:16.713706 systemd[1]: Started sshd@11-10.0.0.29:22-4.175.71.9:59608.service - OpenSSH per-connection server daemon (4.175.71.9:59608).
Apr 24 23:59:16.828484 sshd[6845]: Accepted publickey for core from 4.175.71.9 port 59608 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:16.829117 sshd[6845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:16.833437 systemd-logind[1712]: New session 14 of user core.
Apr 24 23:59:16.838533 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 23:59:16.995588 sshd[6845]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:16.999977 systemd-logind[1712]: Session 14 logged out. Waiting for processes to exit.
Apr 24 23:59:17.000668 systemd[1]: sshd@11-10.0.0.29:22-4.175.71.9:59608.service: Deactivated successfully.
Apr 24 23:59:17.003007 systemd[1]: session-14.scope: Deactivated successfully.
Apr 24 23:59:17.004056 systemd-logind[1712]: Removed session 14.
Apr 24 23:59:22.022016 systemd[1]: Started sshd@12-10.0.0.29:22-4.175.71.9:59614.service - OpenSSH per-connection server daemon (4.175.71.9:59614).
Apr 24 23:59:22.136154 sshd[6909]: Accepted publickey for core from 4.175.71.9 port 59614 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:22.137709 sshd[6909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:22.145391 systemd-logind[1712]: New session 15 of user core.
Apr 24 23:59:22.148525 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 23:59:22.303489 sshd[6909]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:22.308574 systemd-logind[1712]: Session 15 logged out. Waiting for processes to exit.
Apr 24 23:59:22.309287 systemd[1]: sshd@12-10.0.0.29:22-4.175.71.9:59614.service: Deactivated successfully.
Apr 24 23:59:22.311758 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 23:59:22.312867 systemd-logind[1712]: Removed session 15.
Apr 24 23:59:22.326585 systemd[1]: Started sshd@13-10.0.0.29:22-4.175.71.9:59622.service - OpenSSH per-connection server daemon (4.175.71.9:59622).
Apr 24 23:59:22.445384 sshd[6925]: Accepted publickey for core from 4.175.71.9 port 59622 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:22.446573 sshd[6925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:22.451569 systemd-logind[1712]: New session 16 of user core.
Apr 24 23:59:22.454524 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 23:59:22.648324 sshd[6925]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:22.656953 systemd[1]: sshd@13-10.0.0.29:22-4.175.71.9:59622.service: Deactivated successfully.
Apr 24 23:59:22.661219 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 23:59:22.663845 systemd-logind[1712]: Session 16 logged out. Waiting for processes to exit.
Apr 24 23:59:22.683644 systemd[1]: Started sshd@14-10.0.0.29:22-4.175.71.9:59630.service - OpenSSH per-connection server daemon (4.175.71.9:59630).
Apr 24 23:59:22.685444 systemd-logind[1712]: Removed session 16.
Apr 24 23:59:22.797449 sshd[6936]: Accepted publickey for core from 4.175.71.9 port 59630 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:22.798912 sshd[6936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:22.804730 systemd-logind[1712]: New session 17 of user core.
Apr 24 23:59:22.806529 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 23:59:22.965612 sshd[6936]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:22.969372 systemd[1]: sshd@14-10.0.0.29:22-4.175.71.9:59630.service: Deactivated successfully.
Apr 24 23:59:22.970594 systemd-logind[1712]: Session 17 logged out. Waiting for processes to exit.
Apr 24 23:59:22.972111 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 23:59:22.974737 systemd-logind[1712]: Removed session 17.
Apr 24 23:59:27.997652 systemd[1]: Started sshd@15-10.0.0.29:22-4.175.71.9:55924.service - OpenSSH per-connection server daemon (4.175.71.9:55924).
Apr 24 23:59:28.108138 sshd[6948]: Accepted publickey for core from 4.175.71.9 port 55924 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:28.108851 sshd[6948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:28.113967 systemd-logind[1712]: New session 18 of user core.
Apr 24 23:59:28.118523 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 23:59:28.272921 sshd[6948]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:28.278785 systemd[1]: sshd@15-10.0.0.29:22-4.175.71.9:55924.service: Deactivated successfully.
Apr 24 23:59:28.281552 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 23:59:28.282484 systemd-logind[1712]: Session 18 logged out. Waiting for processes to exit.
Apr 24 23:59:28.283876 systemd-logind[1712]: Removed session 18.
Apr 24 23:59:28.298643 systemd[1]: Started sshd@16-10.0.0.29:22-4.175.71.9:55932.service - OpenSSH per-connection server daemon (4.175.71.9:55932).
Apr 24 23:59:28.418367 sshd[6960]: Accepted publickey for core from 4.175.71.9 port 55932 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:28.420063 sshd[6960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:28.424449 systemd-logind[1712]: New session 19 of user core.
Apr 24 23:59:28.430517 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 23:59:28.640407 sshd[6960]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:28.644580 systemd[1]: sshd@16-10.0.0.29:22-4.175.71.9:55932.service: Deactivated successfully.
Apr 24 23:59:28.647432 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 23:59:28.648260 systemd-logind[1712]: Session 19 logged out. Waiting for processes to exit.
Apr 24 23:59:28.649249 systemd-logind[1712]: Removed session 19.
Apr 24 23:59:28.663646 systemd[1]: Started sshd@17-10.0.0.29:22-4.175.71.9:55940.service - OpenSSH per-connection server daemon (4.175.71.9:55940).
Apr 24 23:59:28.776996 sshd[6971]: Accepted publickey for core from 4.175.71.9 port 55940 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:28.778645 sshd[6971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:28.783482 systemd-logind[1712]: New session 20 of user core.
Apr 24 23:59:28.788499 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 23:59:29.626071 sshd[6971]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:29.631712 systemd[1]: sshd@17-10.0.0.29:22-4.175.71.9:55940.service: Deactivated successfully.
Apr 24 23:59:29.637526 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 23:59:29.640075 systemd-logind[1712]: Session 20 logged out. Waiting for processes to exit.
Apr 24 23:59:29.662649 systemd[1]: Started sshd@18-10.0.0.29:22-4.175.71.9:55942.service - OpenSSH per-connection server daemon (4.175.71.9:55942).
Apr 24 23:59:29.664090 systemd-logind[1712]: Removed session 20.
Apr 24 23:59:29.789508 sshd[6994]: Accepted publickey for core from 4.175.71.9 port 55942 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:29.791983 sshd[6994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:29.798508 systemd-logind[1712]: New session 21 of user core.
Apr 24 23:59:29.803519 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 23:59:30.087684 sshd[6994]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:30.092494 systemd-logind[1712]: Session 21 logged out. Waiting for processes to exit.
Apr 24 23:59:30.093638 systemd[1]: sshd@18-10.0.0.29:22-4.175.71.9:55942.service: Deactivated successfully.
Apr 24 23:59:30.096283 systemd[1]: session-21.scope: Deactivated successfully.
Apr 24 23:59:30.098187 systemd-logind[1712]: Removed session 21.
Apr 24 23:59:30.110321 systemd[1]: Started sshd@19-10.0.0.29:22-4.175.71.9:55952.service - OpenSSH per-connection server daemon (4.175.71.9:55952).
Apr 24 23:59:30.232309 sshd[7008]: Accepted publickey for core from 4.175.71.9 port 55952 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:30.233829 sshd[7008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:30.237961 systemd-logind[1712]: New session 22 of user core.
Apr 24 23:59:30.244510 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 24 23:59:30.395329 sshd[7008]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:30.400049 systemd[1]: sshd@19-10.0.0.29:22-4.175.71.9:55952.service: Deactivated successfully.
Apr 24 23:59:30.402406 systemd[1]: session-22.scope: Deactivated successfully.
Apr 24 23:59:30.403273 systemd-logind[1712]: Session 22 logged out. Waiting for processes to exit.
Apr 24 23:59:30.404706 systemd-logind[1712]: Removed session 22.
Apr 24 23:59:30.553020 systemd[1]: run-containerd-runc-k8s.io-89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea-runc.9oYtBK.mount: Deactivated successfully.
Apr 24 23:59:35.423668 systemd[1]: Started sshd@20-10.0.0.29:22-4.175.71.9:59780.service - OpenSSH per-connection server daemon (4.175.71.9:59780).
Apr 24 23:59:35.539382 sshd[7041]: Accepted publickey for core from 4.175.71.9 port 59780 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:35.540468 sshd[7041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:35.545612 systemd-logind[1712]: New session 23 of user core.
Apr 24 23:59:35.549500 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 24 23:59:35.708940 sshd[7041]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:35.712543 systemd[1]: sshd@20-10.0.0.29:22-4.175.71.9:59780.service: Deactivated successfully.
Apr 24 23:59:35.715054 systemd[1]: session-23.scope: Deactivated successfully.
Apr 24 23:59:35.716976 systemd-logind[1712]: Session 23 logged out. Waiting for processes to exit.
Apr 24 23:59:35.719175 systemd-logind[1712]: Removed session 23.
Apr 24 23:59:37.600996 systemd[1]: run-containerd-runc-k8s.io-7b85262b30694679016d604dfa7947f20880ac491d080b16ea3db87a4cff3e8f-runc.COE5tK.mount: Deactivated successfully.
Apr 24 23:59:40.738663 systemd[1]: Started sshd@21-10.0.0.29:22-4.175.71.9:59788.service - OpenSSH per-connection server daemon (4.175.71.9:59788).
Apr 24 23:59:40.858521 sshd[7073]: Accepted publickey for core from 4.175.71.9 port 59788 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:40.860179 sshd[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:40.864295 systemd-logind[1712]: New session 24 of user core.
Apr 24 23:59:40.870545 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 24 23:59:41.023245 sshd[7073]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:41.027415 systemd[1]: sshd@21-10.0.0.29:22-4.175.71.9:59788.service: Deactivated successfully.
Apr 24 23:59:41.029728 systemd[1]: session-24.scope: Deactivated successfully.
Apr 24 23:59:41.030786 systemd-logind[1712]: Session 24 logged out. Waiting for processes to exit.
Apr 24 23:59:41.031890 systemd-logind[1712]: Removed session 24.
Apr 24 23:59:46.049999 systemd[1]: Started sshd@22-10.0.0.29:22-4.175.71.9:43640.service - OpenSSH per-connection server daemon (4.175.71.9:43640).
Apr 24 23:59:46.171797 sshd[7108]: Accepted publickey for core from 4.175.71.9 port 43640 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:46.173394 sshd[7108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:46.179164 systemd-logind[1712]: New session 25 of user core.
Apr 24 23:59:46.182972 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 24 23:59:46.336603 sshd[7108]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:46.340635 systemd-logind[1712]: Session 25 logged out. Waiting for processes to exit.
Apr 24 23:59:46.341585 systemd[1]: sshd@22-10.0.0.29:22-4.175.71.9:43640.service: Deactivated successfully.
Apr 24 23:59:46.344219 systemd[1]: session-25.scope: Deactivated successfully.
Apr 24 23:59:46.345332 systemd-logind[1712]: Removed session 25.
Apr 24 23:59:51.366668 systemd[1]: Started sshd@23-10.0.0.29:22-4.175.71.9:43652.service - OpenSSH per-connection server daemon (4.175.71.9:43652).
Apr 24 23:59:51.478270 sshd[7120]: Accepted publickey for core from 4.175.71.9 port 43652 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:51.479935 sshd[7120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:51.484867 systemd-logind[1712]: New session 26 of user core.
Apr 24 23:59:51.494502 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 24 23:59:51.651819 sshd[7120]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:51.656442 systemd[1]: sshd@23-10.0.0.29:22-4.175.71.9:43652.service: Deactivated successfully.
Apr 24 23:59:51.658941 systemd[1]: session-26.scope: Deactivated successfully.
Apr 24 23:59:51.659777 systemd-logind[1712]: Session 26 logged out. Waiting for processes to exit.
Apr 24 23:59:51.660817 systemd-logind[1712]: Removed session 26.
Apr 24 23:59:56.680654 systemd[1]: Started sshd@24-10.0.0.29:22-4.175.71.9:37598.service - OpenSSH per-connection server daemon (4.175.71.9:37598).
Apr 24 23:59:56.797322 sshd[7134]: Accepted publickey for core from 4.175.71.9 port 37598 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 24 23:59:56.798896 sshd[7134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:56.806531 systemd-logind[1712]: New session 27 of user core.
Apr 24 23:59:56.813503 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 24 23:59:56.969931 sshd[7134]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:56.973552 systemd-logind[1712]: Session 27 logged out. Waiting for processes to exit.
Apr 24 23:59:56.974130 systemd[1]: sshd@24-10.0.0.29:22-4.175.71.9:37598.service: Deactivated successfully.
Apr 24 23:59:56.976935 systemd[1]: session-27.scope: Deactivated successfully.
Apr 24 23:59:56.979224 systemd-logind[1712]: Removed session 27.
Apr 25 00:00:00.554187 systemd[1]: run-containerd-runc-k8s.io-89454f17dae1ac4e7da87566181218ee3c5951c852b6b5b908f5506a03961eea-runc.uDDBML.mount: Deactivated successfully.
Apr 25 00:00:00.561658 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Apr 25 00:00:00.582111 systemd[1]: logrotate.service: Deactivated successfully.
Apr 25 00:00:01.996654 systemd[1]: Started sshd@25-10.0.0.29:22-4.175.71.9:37610.service - OpenSSH per-connection server daemon (4.175.71.9:37610).
Apr 25 00:00:02.117189 sshd[7191]: Accepted publickey for core from 4.175.71.9 port 37610 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:00:02.118956 sshd[7191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:00:02.124958 systemd-logind[1712]: New session 28 of user core.
Apr 25 00:00:02.130560 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 25 00:00:02.285717 sshd[7191]: pam_unix(sshd:session): session closed for user core
Apr 25 00:00:02.290576 systemd[1]: sshd@25-10.0.0.29:22-4.175.71.9:37610.service: Deactivated successfully.
Apr 25 00:00:02.293207 systemd[1]: session-28.scope: Deactivated successfully.
Apr 25 00:00:02.294447 systemd-logind[1712]: Session 28 logged out. Waiting for processes to exit.
Apr 25 00:00:02.295602 systemd-logind[1712]: Removed session 28.