Mar 13 00:55:44.627088 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:55:44.627115 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:55:44.627128 kernel: BIOS-provided physical RAM map:
Mar 13 00:55:44.627143 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:55:44.627153 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 13 00:55:44.627163 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 13 00:55:44.627174 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 13 00:55:44.627183 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 13 00:55:44.627288 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 13 00:55:44.627299 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 13 00:55:44.627307 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 13 00:55:44.627316 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 13 00:55:44.627329 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 13 00:55:44.627337 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 13 00:55:44.627347 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 13 00:55:44.627356 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 13 00:55:44.627675 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 13 00:55:44.627697 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 13 00:55:44.627707 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 13 00:55:44.627715 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 13 00:55:44.627724 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 13 00:55:44.627733 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 13 00:55:44.627741 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:55:44.627750 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:55:44.627758 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:55:44.627767 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:55:44.627776 kernel: NX (Execute Disable) protection: active
Mar 13 00:55:44.627787 kernel: APIC: Static calls initialized
Mar 13 00:55:44.627802 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 13 00:55:44.627813 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 13 00:55:44.627822 kernel: extended physical RAM map:
Mar 13 00:55:44.627831 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:55:44.627839 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 13 00:55:44.627848 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 13 00:55:44.627857 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 13 00:55:44.627866 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 13 00:55:44.627874 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 13 00:55:44.627884 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 13 00:55:44.627896 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 13 00:55:44.627910 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 13 00:55:44.627924 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 13 00:55:44.627933 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 13 00:55:44.627942 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 13 00:55:44.627951 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 13 00:55:44.627964 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 13 00:55:44.627976 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 13 00:55:44.627987 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 13 00:55:44.627998 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 13 00:55:44.628007 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 13 00:55:44.628016 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 13 00:55:44.628025 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 13 00:55:44.628034 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 13 00:55:44.628043 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 13 00:55:44.628053 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 13 00:55:44.628062 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:55:44.628080 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:55:44.628090 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:55:44.628099 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:55:44.628199 kernel: efi: EFI v2.7 by EDK II
Mar 13 00:55:44.628210 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 13 00:55:44.628312 kernel: random: crng init done
Mar 13 00:55:44.628324 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 13 00:55:44.628428 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 13 00:55:44.628803 kernel: secureboot: Secure boot disabled
Mar 13 00:55:44.628817 kernel: SMBIOS 2.8 present.
Mar 13 00:55:44.628829 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 13 00:55:44.628844 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:55:44.628853 kernel: Hypervisor detected: KVM
Mar 13 00:55:44.628862 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 13 00:55:44.628871 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:55:44.628880 kernel: kvm-clock: using sched offset of 35215400732 cycles
Mar 13 00:55:44.628890 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:55:44.628900 kernel: tsc: Detected 2445.426 MHz processor
Mar 13 00:55:44.628913 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:55:44.628924 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:55:44.628936 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 13 00:55:44.628946 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 13 00:55:44.628961 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:55:44.628971 kernel: Using GB pages for direct mapping
Mar 13 00:55:44.628980 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:55:44.628989 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 13 00:55:44.628998 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 13 00:55:44.629008 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:55:44.629020 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:55:44.629031 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 13 00:55:44.629048 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:55:44.629057 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:55:44.629067 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:55:44.629077 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:55:44.629086 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 13 00:55:44.629096 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 13 00:55:44.629105 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 13 00:55:44.629114 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 13 00:55:44.629126 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 13 00:55:44.629142 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 13 00:55:44.629153 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 13 00:55:44.629163 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 13 00:55:44.629172 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 13 00:55:44.629182 kernel: No NUMA configuration found
Mar 13 00:55:44.629191 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 13 00:55:44.629200 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 13 00:55:44.629210 kernel: Zone ranges:
Mar 13 00:55:44.629219 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:55:44.629236 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 13 00:55:44.629247 kernel: Normal empty
Mar 13 00:55:44.629259 kernel: Device empty
Mar 13 00:55:44.629268 kernel: Movable zone start for each node
Mar 13 00:55:44.629277 kernel: Early memory node ranges
Mar 13 00:55:44.629286 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 13 00:55:44.629398 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 13 00:55:44.629410 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 13 00:55:44.629419 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 13 00:55:44.629428 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 13 00:55:44.629819 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 13 00:55:44.629832 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 13 00:55:44.629842 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 13 00:55:44.629852 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 13 00:55:44.629958 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:55:44.629983 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 13 00:55:44.629996 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 13 00:55:44.630006 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:55:44.630015 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 13 00:55:44.630025 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 13 00:55:44.630037 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 13 00:55:44.630049 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 13 00:55:44.630063 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 13 00:55:44.630073 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:55:44.630083 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:55:44.630092 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:55:44.630102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:55:44.630116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:55:44.630128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:55:44.630141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:55:44.630151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:55:44.630161 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:55:44.630170 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:55:44.630180 kernel: TSC deadline timer available
Mar 13 00:55:44.630190 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:55:44.630199 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:55:44.630215 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:55:44.630228 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:55:44.630238 kernel: CPU topo: Num. cores per package: 4
Mar 13 00:55:44.630247 kernel: CPU topo: Num. threads per package: 4
Mar 13 00:55:44.630257 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 13 00:55:44.630266 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:55:44.630276 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:55:44.630285 kernel: kvm-guest: setup PV sched yield
Mar 13 00:55:44.630295 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 13 00:55:44.630314 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:55:44.630324 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:55:44.630334 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 13 00:55:44.630344 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 13 00:55:44.630353 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 13 00:55:44.630363 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 13 00:55:44.630373 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:55:44.630384 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:55:44.630711 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:55:44.630735 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:55:44.630747 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:55:44.630757 kernel: Fallback order for Node 0: 0
Mar 13 00:55:44.630766 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 13 00:55:44.630776 kernel: Policy zone: DMA32
Mar 13 00:55:44.630786 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:55:44.630796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 13 00:55:44.630806 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:55:44.630820 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:55:44.630833 kernel: Dynamic Preempt: voluntary
Mar 13 00:55:44.630844 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:55:44.630858 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:55:44.630869 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 13 00:55:44.630879 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:55:44.630889 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:55:44.630898 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:55:44.630908 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:55:44.630918 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 13 00:55:44.631035 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:55:44.631050 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:55:44.631062 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:55:44.631073 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 13 00:55:44.631083 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:55:44.631093 kernel: Console: colour dummy device 80x25
Mar 13 00:55:44.631103 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:55:44.631113 kernel: ACPI: Core revision 20240827
Mar 13 00:55:44.631123 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:55:44.631137 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:55:44.631150 kernel: x2apic enabled
Mar 13 00:55:44.631162 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:55:44.631175 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:55:44.631185 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:55:44.631195 kernel: kvm-guest: setup PV IPIs
Mar 13 00:55:44.631205 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:55:44.631215 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:55:44.631225 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 13 00:55:44.631239 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:55:44.631251 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:55:44.631263 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:55:44.631275 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:55:44.631287 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:55:44.631297 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:55:44.631307 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:55:44.631317 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:55:44.631332 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:55:44.631782 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:55:44.631797 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:55:44.631807 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:55:44.631817 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:55:44.631827 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:55:44.631837 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:55:44.631847 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:55:44.631856 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:55:44.631876 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 13 00:55:44.631887 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:55:44.631897 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:55:44.631906 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:55:44.631916 kernel: landlock: Up and running.
Mar 13 00:55:44.631925 kernel: SELinux: Initializing.
Mar 13 00:55:44.631935 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:55:44.631945 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:55:44.631958 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:55:44.631973 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 13 00:55:44.631983 kernel: signal: max sigframe size: 1776
Mar 13 00:55:44.631992 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:55:44.632003 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:55:44.632013 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:55:44.632022 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:55:44.632036 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:55:44.632046 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:55:44.632055 kernel: .... node #0, CPUs: #1 #2 #3
Mar 13 00:55:44.632069 kernel: smp: Brought up 1 node, 4 CPUs
Mar 13 00:55:44.632079 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 13 00:55:44.632090 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Mar 13 00:55:44.632099 kernel: devtmpfs: initialized
Mar 13 00:55:44.632113 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:55:44.632123 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 13 00:55:44.632132 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 13 00:55:44.632229 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 13 00:55:44.632244 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 13 00:55:44.632254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 13 00:55:44.632268 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 13 00:55:44.632278 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:55:44.632288 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 13 00:55:44.632297 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:55:44.632307 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:55:44.632317 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:55:44.632327 kernel: audit: type=2000 audit(1773363330.301:1): state=initialized audit_enabled=0 res=1
Mar 13 00:55:44.632343 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:55:44.632355 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:55:44.632364 kernel: cpuidle: using governor menu
Mar 13 00:55:44.632374 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:55:44.632384 kernel: dca service started, version 1.12.1
Mar 13 00:55:44.632394 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 13 00:55:44.632404 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:55:44.632414 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:55:44.632427 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:55:44.632818 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:55:44.632832 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:55:44.632842 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:55:44.632852 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:55:44.632862 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:55:44.632871 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:55:44.632881 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:55:44.632892 kernel: ACPI: Interpreter enabled
Mar 13 00:55:44.632905 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:55:44.632919 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:55:44.632929 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:55:44.632939 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:55:44.632949 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:55:44.632959 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:55:44.635192 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:55:44.635402 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:55:44.636110 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:55:44.636132 kernel: PCI host bridge to bus 0000:00
Mar 13 00:55:44.636986 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:55:44.637166 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:55:44.637338 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:55:44.637869 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 13 00:55:44.638056 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 13 00:55:44.638205 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 13 00:55:44.638343 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:55:44.639215 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:55:44.639827 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:55:44.639983 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 13 00:55:44.640125 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 13 00:55:44.640263 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 13 00:55:44.640408 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:55:44.640942 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 18554 usecs
Mar 13 00:55:44.641285 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 13 00:55:44.641431 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 13 00:55:44.641957 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 13 00:55:44.642099 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 13 00:55:44.642428 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:55:44.642964 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 13 00:55:44.643107 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 13 00:55:44.643248 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 13 00:55:44.644336 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:55:44.645007 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 13 00:55:44.645861 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 13 00:55:44.646241 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 13 00:55:44.646432 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 13 00:55:44.647314 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:55:44.647896 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:55:44.648096 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 18554 usecs
Mar 13 00:55:44.648949 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:55:44.649148 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 13 00:55:44.649352 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 13 00:55:44.650064 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:55:44.650262 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 13 00:55:44.650278 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:55:44.650291 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:55:44.650304 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:55:44.650316 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:55:44.650328 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:55:44.650345 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:55:44.650355 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:55:44.650365 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:55:44.650374 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:55:44.650384 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:55:44.650394 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:55:44.650407 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:55:44.650419 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:55:44.650787 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:55:44.650807 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:55:44.650820 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:55:44.650831 kernel: iommu: Default domain type: Translated
Mar 13 00:55:44.650844 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:55:44.650854 kernel: efivars: Registered efivars operations
Mar 13 00:55:44.650864 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:55:44.650874 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:55:44.650884 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 13 00:55:44.650893 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 13 00:55:44.650908 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 13 00:55:44.650918 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 13 00:55:44.650930 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 13 00:55:44.650942 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 13 00:55:44.650955 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 13 00:55:44.650965 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 13 00:55:44.651159 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:55:44.651345 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:55:44.651939 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:55:44.651959 kernel: vgaarb: loaded
Mar 13 00:55:44.651973 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:55:44.651983 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:55:44.651993 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:55:44.652003 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:55:44.652012 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:55:44.652022 kernel: pnp: PnP ACPI init
Mar 13 00:55:44.653083 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 13 00:55:44.653107 kernel: pnp: PnP ACPI: found 6 devices
Mar 13 00:55:44.653118 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:55:44.653130 kernel: NET: Registered PF_INET protocol family
Mar 13 00:55:44.653143 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:55:44.653155 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:55:44.653193 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:55:44.653207 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:55:44.653218 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:55:44.653231 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:55:44.653242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:55:44.653255 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:55:44.653267 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:55:44.653278 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:55:44.653843 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 13 00:55:44.654041 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 13 00:55:44.654233 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:55:44.654424 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:55:44.654986 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:55:44.655170 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 13 00:55:44.655350 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 13 00:55:44.655908 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 13 00:55:44.655926 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:55:44.655938 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:55:44.655948 kernel: Initialise system trusted keyrings
Mar 13 00:55:44.655958 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:55:44.655975 kernel: Key type asymmetric registered
Mar 13 00:55:44.655986 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:55:44.656000 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:55:44.656010 kernel: io scheduler mq-deadline registered
Mar 13 00:55:44.656021 kernel: io scheduler kyber registered
Mar 13 00:55:44.656031 kernel: io scheduler bfq registered
Mar 13 00:55:44.656041 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:55:44.656056 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:55:44.656070 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:55:44.656083 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 13 00:55:44.656096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:55:44.656109 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:55:44.656120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:55:44.656131 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:55:44.656146 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:55:44.657108 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 13 00:55:44.657128 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 13 00:55:44.657313 kernel: rtc_cmos 00:04: registered as rtc0
Mar 13 00:55:44.657905 kernel: rtc_cmos 00:04: setting system clock to 2026-03-13T00:55:42 UTC (1773363342)
Mar 13 00:55:44.658093 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 13 00:55:44.658109 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:55:44.658120 kernel: efifb: probing for efifb
Mar 13 00:55:44.658136 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 13 00:55:44.658147 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 13 00:55:44.658157 kernel: efifb: scrolling: redraw
Mar 13 00:55:44.658169 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 00:55:44.658182 kernel: Console: switching to colour frame buffer device 160x50
Mar 13 00:55:44.658195 kernel: fb0: EFI VGA frame buffer device
Mar 13 00:55:44.658205 kernel: pstore: Using crash dump compression: deflate
Mar 13 00:55:44.658216 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 13 00:55:44.658226 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:55:44.658241 kernel: Segment Routing with IPv6
Mar 13 00:55:44.658251 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:55:44.658261 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:55:44.658275 kernel: Key type dns_resolver
registered Mar 13 00:55:44.658286 kernel: IPI shorthand broadcast: enabled Mar 13 00:55:44.658297 kernel: sched_clock: Marking stable (12675311379, 2814064503)->(16765511475, -1276135593) Mar 13 00:55:44.658307 kernel: registered taskstats version 1 Mar 13 00:55:44.658317 kernel: Loading compiled-in X.509 certificates Mar 13 00:55:44.658327 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8' Mar 13 00:55:44.658341 kernel: Demotion targets for Node 0: null Mar 13 00:55:44.658354 kernel: Key type .fscrypt registered Mar 13 00:55:44.658366 kernel: Key type fscrypt-provisioning registered Mar 13 00:55:44.658377 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 13 00:55:44.658387 kernel: ima: Allocated hash algorithm: sha1 Mar 13 00:55:44.658397 kernel: ima: No architecture policies found Mar 13 00:55:44.658407 kernel: clk: Disabling unused clocks Mar 13 00:55:44.658417 kernel: Warning: unable to open an initial console. Mar 13 00:55:44.658428 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 13 00:55:44.658822 kernel: Write protecting the kernel read-only data: 40960k Mar 13 00:55:44.658835 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 13 00:55:44.658845 kernel: Run /init as init process Mar 13 00:55:44.658855 kernel: with arguments: Mar 13 00:55:44.658866 kernel: /init Mar 13 00:55:44.658879 kernel: with environment: Mar 13 00:55:44.658891 kernel: HOME=/ Mar 13 00:55:44.658904 kernel: TERM=linux Mar 13 00:55:44.658916 systemd[1]: Successfully made /usr/ read-only. 
Mar 13 00:55:44.658934 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:55:44.658946 systemd[1]: Detected virtualization kvm. Mar 13 00:55:44.658956 systemd[1]: Detected architecture x86-64. Mar 13 00:55:44.658966 systemd[1]: Running in initrd. Mar 13 00:55:44.658978 systemd[1]: No hostname configured, using default hostname. Mar 13 00:55:44.658992 systemd[1]: Hostname set to <localhost>. Mar 13 00:55:44.659005 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:55:44.659022 systemd[1]: Queued start job for default target initrd.target. Mar 13 00:55:44.659033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:55:44.659044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:55:44.659055 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 00:55:44.659066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:55:44.659077 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 00:55:44.659091 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 00:55:44.659110 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 00:55:44.659124 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 00:55:44.659135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 13 00:55:44.659146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:55:44.659156 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:55:44.659167 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:55:44.659178 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:55:44.659189 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:55:44.659207 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:55:44.659220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:55:44.659233 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 00:55:44.659244 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 13 00:55:44.659254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:55:44.659265 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:55:44.659276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:55:44.659287 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:55:44.659299 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 13 00:55:44.659321 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:55:44.659334 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 00:55:44.659346 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 13 00:55:44.659357 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 00:55:44.659367 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:55:44.659378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 13 00:55:44.659389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:55:44.659796 systemd-journald[202]: Collecting audit messages is disabled. Mar 13 00:55:44.659836 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 13 00:55:44.659856 systemd-journald[202]: Journal started Mar 13 00:55:44.659878 systemd-journald[202]: Runtime Journal (/run/log/journal/1f0ef089c83748e99d1a123a490e4341) is 6M, max 48.1M, 42.1M free. Mar 13 00:55:44.662108 systemd-modules-load[204]: Inserted module 'overlay' Mar 13 00:55:44.689314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:55:44.715995 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:55:44.729795 systemd[1]: Finished systemd-fsck-usr.service. Mar 13 00:55:44.763998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 00:55:44.836995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 13 00:55:44.841124 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:55:44.874262 kernel: Bridge firewalling registered Mar 13 00:55:44.867775 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:55:44.901377 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 13 00:55:44.903294 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:55:44.943993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 00:55:44.986869 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:55:45.002033 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 13 00:55:45.015415 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 13 00:55:45.032857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:55:45.058968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:55:45.089773 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:55:45.121347 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 13 00:55:45.174194 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:55:45.196165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:55:45.207326 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:55:45.223853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:55:45.388033 systemd-resolved[260]: Positive Trust Anchors: Mar 13 00:55:45.388146 systemd-resolved[260]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:55:45.388171 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:55:45.392319 systemd-resolved[260]: Defaulting to hostname 'linux'. Mar 13 00:55:45.398134 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:55:45.424396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:55:45.584039 kernel: SCSI subsystem initialized Mar 13 00:55:45.607950 kernel: Loading iSCSI transport class v2.0-870. Mar 13 00:55:45.646984 kernel: iscsi: registered transport (tcp) Mar 13 00:55:45.702095 kernel: iscsi: registered transport (qla4xxx) Mar 13 00:55:45.702240 kernel: QLogic iSCSI HBA Driver Mar 13 00:55:45.784113 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:55:45.843409 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:55:45.850160 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:55:46.018133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 13 00:55:46.026394 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 13 00:55:46.200019 kernel: raid6: avx2x4 gen() 21111 MB/s Mar 13 00:55:46.222936 kernel: raid6: avx2x2 gen() 17627 MB/s Mar 13 00:55:46.251843 kernel: raid6: avx2x1 gen() 14308 MB/s Mar 13 00:55:46.251890 kernel: raid6: using algorithm avx2x4 gen() 21111 MB/s Mar 13 00:55:46.282975 kernel: raid6: .... xor() 3410 MB/s, rmw enabled Mar 13 00:55:46.283021 kernel: raid6: using avx2x2 recovery algorithm Mar 13 00:55:46.321882 kernel: xor: automatically using best checksumming function avx Mar 13 00:55:46.727991 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 00:55:46.752970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:55:46.759362 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:55:46.854278 systemd-udevd[453]: Using default interface naming scheme 'v255'. Mar 13 00:55:46.865137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:55:46.888320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 13 00:55:46.990322 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Mar 13 00:55:47.105264 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:55:47.137180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:55:47.368055 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:55:47.372202 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 00:55:47.520900 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 13 00:55:47.583922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:55:47.591246 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 13 00:55:47.584269 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 13 00:55:47.620316 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 13 00:55:47.620349 kernel: GPT:9289727 != 19775487 Mar 13 00:55:47.620367 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 13 00:55:47.631625 kernel: GPT:9289727 != 19775487 Mar 13 00:55:47.641249 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 13 00:55:47.652787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 13 00:55:47.671109 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:55:47.704276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:55:47.720955 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:55:47.769067 kernel: libata version 3.00 loaded. Mar 13 00:55:47.769111 kernel: cryptd: max_cpu_qlen set to 1000 Mar 13 00:55:47.817061 kernel: ahci 0000:00:1f.2: version 3.0 Mar 13 00:55:47.818418 kernel: AES CTR mode by8 optimization enabled Mar 13 00:55:47.832409 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 13 00:55:47.925954 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 13 00:55:47.926916 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 13 00:55:47.927105 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 13 00:55:47.960003 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 13 00:55:48.019372 kernel: scsi host0: ahci Mar 13 00:55:47.979292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:55:48.041799 kernel: scsi host1: ahci Mar 13 00:55:48.060255 kernel: scsi host2: ahci Mar 13 00:55:48.066876 kernel: scsi host3: ahci Mar 13 00:55:48.029979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 13 00:55:48.139113 kernel: scsi host4: ahci Mar 13 00:55:48.139346 kernel: scsi host5: ahci Mar 13 00:55:48.139855 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 13 00:55:48.139868 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 13 00:55:48.139879 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 13 00:55:48.139889 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 13 00:55:48.139898 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 13 00:55:48.139909 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Mar 13 00:55:48.139924 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 13 00:55:48.158840 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 13 00:55:48.216399 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 13 00:55:48.245884 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 13 00:55:48.249401 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 00:55:48.340180 disk-uuid[623]: Primary Header is updated. Mar 13 00:55:48.340180 disk-uuid[623]: Secondary Entries is updated. Mar 13 00:55:48.340180 disk-uuid[623]: Secondary Header is updated. 
Mar 13 00:55:48.385847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 13 00:55:48.385870 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 13 00:55:48.467840 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 13 00:55:48.467894 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 13 00:55:48.489996 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 13 00:55:48.502104 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 13 00:55:48.524865 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 13 00:55:48.524917 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 13 00:55:48.541796 kernel: ata3.00: LPM support broken, forcing max_power Mar 13 00:55:48.541832 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 13 00:55:48.541845 kernel: ata3.00: applying bridge limits Mar 13 00:55:48.563740 kernel: ata3.00: LPM support broken, forcing max_power Mar 13 00:55:48.563772 kernel: ata3.00: configured for UDMA/100 Mar 13 00:55:48.584816 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 13 00:55:48.689077 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 13 00:55:48.689398 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 13 00:55:48.725195 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 13 00:55:49.275153 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 00:55:49.291847 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:55:49.305059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:55:49.333207 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:55:49.360985 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 00:55:49.404955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 13 00:55:49.406900 disk-uuid[624]: The operation has completed successfully. 
Mar 13 00:55:49.481050 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:55:49.527232 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 00:55:49.527915 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 00:55:49.542302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 00:55:49.605366 sh[652]: Success Mar 13 00:55:49.681710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 13 00:55:49.681764 kernel: device-mapper: uevent: version 1.0.3 Mar 13 00:55:49.696040 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 13 00:55:49.743934 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 13 00:55:49.840388 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 00:55:49.887905 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 00:55:49.908301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 13 00:55:49.979888 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (664) Mar 13 00:55:49.979921 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3 Mar 13 00:55:49.979936 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:55:50.039272 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 13 00:55:50.039312 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 13 00:55:50.044055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 00:55:50.045935 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Mar 13 00:55:50.094107 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 00:55:50.096410 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 00:55:50.160413 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 13 00:55:50.253865 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (691) Mar 13 00:55:50.279056 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:55:50.279125 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:55:50.318383 kernel: BTRFS info (device vda6): turning on async discard Mar 13 00:55:50.318431 kernel: BTRFS info (device vda6): enabling free space tree Mar 13 00:55:50.348988 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:55:50.356370 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 00:55:50.386967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 13 00:55:50.652995 ignition[754]: Ignition 2.22.0 Mar 13 00:55:50.653013 ignition[754]: Stage: fetch-offline Mar 13 00:55:50.653058 ignition[754]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:55:50.653069 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:55:50.653156 ignition[754]: parsed url from cmdline: "" Mar 13 00:55:50.653160 ignition[754]: no config URL provided Mar 13 00:55:50.653166 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 00:55:50.653175 ignition[754]: no config at "/usr/lib/ignition/user.ign" Mar 13 00:55:50.653199 ignition[754]: op(1): [started] loading QEMU firmware config module Mar 13 00:55:50.727843 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 13 00:55:50.653204 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 13 00:55:50.748853 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:55:50.684294 ignition[754]: op(1): [finished] loading QEMU firmware config module Mar 13 00:55:50.890948 systemd-networkd[841]: lo: Link UP Mar 13 00:55:50.891062 systemd-networkd[841]: lo: Gained carrier Mar 13 00:55:50.894413 systemd-networkd[841]: Enumeration completed Mar 13 00:55:50.895856 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:55:50.901393 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:55:50.901399 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:55:50.909826 systemd-networkd[841]: eth0: Link UP Mar 13 00:55:50.910109 systemd-networkd[841]: eth0: Gained carrier Mar 13 00:55:50.910120 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:55:50.924113 systemd[1]: Reached target network.target - Network. Mar 13 00:55:51.048786 systemd-networkd[841]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 13 00:55:51.363920 systemd-resolved[260]: Detected conflict on linux IN A 10.0.0.147 Mar 13 00:55:51.364036 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. 
Mar 13 00:55:52.117219 ignition[754]: parsing config with SHA512: 8b7b573d87784540c1f5ca4a353c8942a88c3c106e00b29db0cfd9d3f8e8a6b204ef28d74b0e52f03b25582e9623cbf773c95d88ebd34201f57c728ac2ef1db2 Mar 13 00:55:52.182831 unknown[754]: fetched base config from "system" Mar 13 00:55:52.182943 unknown[754]: fetched user config from "qemu" Mar 13 00:55:52.184134 ignition[754]: fetch-offline: fetch-offline passed Mar 13 00:55:52.189323 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:55:52.184220 ignition[754]: Ignition finished successfully Mar 13 00:55:52.210211 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 13 00:55:52.213174 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 00:55:52.429316 ignition[846]: Ignition 2.22.0 Mar 13 00:55:52.429948 ignition[846]: Stage: kargs Mar 13 00:55:52.430095 ignition[846]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:55:52.442883 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 00:55:52.430108 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:55:52.467103 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 00:55:52.432858 ignition[846]: kargs: kargs passed Mar 13 00:55:52.432910 ignition[846]: Ignition finished successfully Mar 13 00:55:52.599990 ignition[854]: Ignition 2.22.0 Mar 13 00:55:52.600100 ignition[854]: Stage: disks Mar 13 00:55:52.600288 ignition[854]: no configs at "/usr/lib/ignition/base.d" Mar 13 00:55:52.600307 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:55:52.602354 ignition[854]: disks: disks passed Mar 13 00:55:52.602400 ignition[854]: Ignition finished successfully Mar 13 00:55:52.662164 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 13 00:55:52.688145 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 00:55:52.693876 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 00:55:52.717164 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:55:52.741169 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:55:52.766047 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:55:52.794828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 13 00:55:52.846439 systemd-networkd[841]: eth0: Gained IPv6LL Mar 13 00:55:52.899399 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 13 00:55:52.917255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 00:55:52.952931 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 00:55:53.420823 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none. Mar 13 00:55:53.422039 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 00:55:53.434922 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 00:55:53.448175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 00:55:53.472427 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 00:55:53.490008 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 13 00:55:53.490065 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 00:55:53.490102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:55:53.589786 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 13 00:55:53.593163 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 13 00:55:53.625964 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872) Mar 13 00:55:53.654405 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:55:53.654875 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:55:53.707850 kernel: BTRFS info (device vda6): turning on async discard Mar 13 00:55:53.707893 kernel: BTRFS info (device vda6): enabling free space tree Mar 13 00:55:53.711306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:55:53.738772 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 00:55:53.765984 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory Mar 13 00:55:53.799997 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 00:55:53.832972 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 00:55:54.316106 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 00:55:54.319031 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 00:55:54.386888 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 00:55:54.426039 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 00:55:54.451373 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:55:54.508797 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 13 00:55:54.559794 ignition[987]: INFO : Ignition 2.22.0
Mar 13 00:55:54.559794 ignition[987]: INFO : Stage: mount
Mar 13 00:55:54.582081 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:55:54.582081 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:55:54.582081 ignition[987]: INFO : mount: mount passed
Mar 13 00:55:54.582081 ignition[987]: INFO : Ignition finished successfully
Mar 13 00:55:54.583295 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:55:54.653341 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:55:54.727838 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:55:54.787945 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (999)
Mar 13 00:55:54.809128 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:55:54.809160 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:55:54.848002 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:55:54.848049 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:55:54.853953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:55:55.009376 ignition[1016]: INFO : Ignition 2.22.0
Mar 13 00:55:55.009376 ignition[1016]: INFO : Stage: files
Mar 13 00:55:55.027785 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:55:55.027785 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:55:55.027785 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:55:55.074219 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:55:55.074219 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:55:55.124283 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:55:55.143172 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:55:55.164246 unknown[1016]: wrote ssh authorized keys file for user: core
Mar 13 00:55:55.176229 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:55:55.208413 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:55:55.208413 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:55:55.305884 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:55:55.420244 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:55:55.420244 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:55:55.464899 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:55:55.464899 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:55:55.464899 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:55:55.464899 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:55:55.464899 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:55:55.464899 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:55:55.596377 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:55:55.596377 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:55:55.596377 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:55:55.596377 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:55:55.596377 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:55:55.596377 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:55:55.596377 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 13 00:55:55.887745 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:55:56.591885 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:55:56.591885 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:55:56.636090 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:55:56.665108 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:55:56.665108 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:55:56.665108 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 13 00:55:56.665108 ignition[1016]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:55:56.744945 ignition[1016]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 13 00:55:56.744945 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 13 00:55:56.744945 ignition[1016]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:55:56.802876 ignition[1016]: INFO : files: files passed
Mar 13 00:55:56.802876 ignition[1016]: INFO : Ignition finished successfully
Mar 13 00:55:56.880263 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:55:56.915087 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:55:56.996153 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:55:57.013320 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:55:57.013984 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:55:57.096989 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 13 00:55:57.112178 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:55:57.112178 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:55:57.144699 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:55:57.118212 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:55:57.135069 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:55:57.158080 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:55:57.380211 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:55:57.380830 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:55:57.407235 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:55:57.421068 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:55:57.443873 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:55:57.482193 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:55:57.597039 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:55:57.636274 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:55:57.721024 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:55:57.735344 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:55:57.757981 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:55:57.770283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:55:57.770951 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:55:57.780855 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:55:57.793209 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:55:57.808849 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:55:57.810952 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:55:58.227001 ignition[1071]: INFO : Ignition 2.22.0
Mar 13 00:55:58.227001 ignition[1071]: INFO : Stage: umount
Mar 13 00:55:58.227001 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:55:58.227001 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:55:58.227001 ignition[1071]: INFO : umount: umount passed
Mar 13 00:55:58.227001 ignition[1071]: INFO : Ignition finished successfully
Mar 13 00:55:57.822219 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:55:57.833109 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:55:57.844176 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:55:58.451373 kernel: hrtimer: interrupt took 2787705 ns
Mar 13 00:55:57.855154 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:55:57.866110 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:55:57.879356 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:55:57.891064 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:55:57.903064 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:55:57.903189 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:55:57.918282 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:55:57.921365 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:55:57.933222 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:55:57.934805 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:55:57.935027 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:55:57.935136 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:55:57.949932 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:55:57.950349 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:55:57.964170 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:55:57.976258 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:55:57.978099 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:55:57.990879 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:55:58.003379 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:55:58.016106 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:55:58.016911 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:55:58.019944 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:55:58.020061 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:55:58.036152 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:55:58.036411 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:55:58.047413 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:55:58.047983 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:55:58.063203 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:55:58.075007 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:55:58.075292 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:55:58.132200 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:55:58.155248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:55:58.155901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:55:58.189972 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:55:58.190135 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:55:58.231395 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:55:58.231945 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:55:58.238897 systemd[1]: Stopped target network.target - Network.
Mar 13 00:55:58.247291 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:55:58.247899 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:55:58.276012 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:55:58.276130 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:55:58.303152 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:55:58.303309 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:55:58.323318 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:55:58.323383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:55:58.355996 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:55:58.380178 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:55:58.395351 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:55:58.399066 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:55:58.399285 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:55:58.472335 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:55:58.485022 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:55:58.485319 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:55:58.519264 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:55:58.519384 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:55:58.559007 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:55:58.559918 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:55:58.560087 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:55:58.624394 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:55:58.625212 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:55:58.625827 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:55:58.638834 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:55:58.660356 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:55:58.660416 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:55:58.686021 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:55:58.686112 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:55:58.746289 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:55:58.769138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:55:58.769266 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:55:58.792946 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:55:58.793030 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:55:58.839950 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:55:58.840029 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:55:58.864420 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:55:58.877054 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:55:58.937921 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:55:58.938074 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:55:59.114950 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:55:59.116885 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:55:59.154893 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:55:59.155275 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:55:59.173337 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:55:59.173430 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:55:59.213335 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:55:59.214024 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:55:59.805405 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:55:59.806380 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:55:59.840259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:55:59.840876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:55:59.885388 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:55:59.910366 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:55:59.911061 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:55:59.937850 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:55:59.938240 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:55:59.987208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:55:59.987416 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:56:00.041352 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:56:00.041426 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:56:00.041850 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:56:00.138054 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:56:00.138413 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:56:00.177271 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:56:00.210163 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:56:00.297937 systemd[1]: Switching root.
Mar 13 00:56:00.366890 systemd-journald[202]: Journal stopped
Mar 13 00:56:06.167856 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:56:06.168701 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:56:06.168732 kernel: SELinux: policy capability open_perms=1
Mar 13 00:56:06.168745 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:56:06.168756 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:56:06.168767 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:56:06.168783 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:56:06.168794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:56:06.168805 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:56:06.168816 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:56:06.168829 kernel: audit: type=1403 audit(1773363360.824:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:56:06.168845 systemd[1]: Successfully loaded SELinux policy in 183.874ms.
Mar 13 00:56:06.168864 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.856ms.
Mar 13 00:56:06.168877 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:56:06.168890 systemd[1]: Detected virtualization kvm.
Mar 13 00:56:06.168902 systemd[1]: Detected architecture x86-64.
Mar 13 00:56:06.168913 systemd[1]: Detected first boot.
Mar 13 00:56:06.168930 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:56:06.168942 zram_generator::config[1116]: No configuration found.
Mar 13 00:56:06.168958 kernel: Guest personality initialized and is inactive
Mar 13 00:56:06.168970 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:56:06.168981 kernel: Initialized host personality
Mar 13 00:56:06.168992 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:56:06.169004 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:56:06.169017 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:56:06.169029 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:56:06.169040 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:56:06.169053 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:56:06.169067 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:56:06.169079 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:56:06.169091 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:56:06.169104 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:56:06.169116 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:56:06.169128 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:56:06.169140 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:56:06.169152 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:56:06.169167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:56:06.169179 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:56:06.169191 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:56:06.169203 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:56:06.169215 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:56:06.169228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:56:06.169240 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:56:06.169251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:56:06.169266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:56:06.169278 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:56:06.169290 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:56:06.169302 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:56:06.169313 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:56:06.169325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:56:06.169337 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:56:06.169349 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:56:06.169362 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:56:06.169376 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:56:06.169389 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:56:06.169402 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:56:06.169413 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:56:06.169425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:56:06.169720 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:56:06.169736 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:56:06.169748 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:56:06.169760 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:56:06.169776 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:56:06.169795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:06.169807 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:56:06.169818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:56:06.169830 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:56:06.169843 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:56:06.169855 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:56:06.169867 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:56:06.169879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:06.169894 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:56:06.169905 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:56:06.169919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:56:06.169931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:56:06.169943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:56:06.169955 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:56:06.169967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:56:06.169980 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:56:06.169994 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:56:06.170006 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:56:06.170018 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:56:06.170029 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:56:06.170041 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:06.170053 kernel: ACPI: bus type drm_connector registered
Mar 13 00:56:06.170065 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:56:06.170076 kernel: fuse: init (API version 7.41)
Mar 13 00:56:06.170088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:56:06.170102 kernel: loop: module loaded
Mar 13 00:56:06.170113 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:56:06.170152 systemd-journald[1201]: Collecting audit messages is disabled.
Mar 13 00:56:06.170178 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:56:06.170191 systemd-journald[1201]: Journal started
Mar 13 00:56:06.170211 systemd-journald[1201]: Runtime Journal (/run/log/journal/1f0ef089c83748e99d1a123a490e4341) is 6M, max 48.1M, 42.1M free.
Mar 13 00:56:03.234899 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:56:03.267319 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 13 00:56:03.271228 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:56:03.272316 systemd[1]: systemd-journald.service: Consumed 3.693s CPU time.
Mar 13 00:56:06.241223 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:56:06.267946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:56:06.297205 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:56:06.297256 systemd[1]: Stopped verity-setup.service.
Mar 13 00:56:06.297281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:06.347916 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:56:06.349817 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:56:06.362310 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:56:06.375372 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:56:06.388379 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:56:06.400931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:56:06.413430 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:56:06.424785 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:56:06.438383 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:56:06.453046 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:56:06.454002 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:56:06.467358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:56:06.468710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:56:06.482001 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:56:06.482729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:56:06.495344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:56:06.496197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:56:06.512238 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:56:06.513784 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:56:06.528317 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:56:06.529167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:56:06.545410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:56:06.563298 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:56:06.577854 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:56:06.596289 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:56:06.622064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:56:06.664278 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:56:06.683168 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:56:06.712185 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:56:06.725043 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:56:06.725085 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:56:06.739024 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:56:06.755914 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:56:06.767276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:06.787088 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:56:06.805123 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:56:06.822232 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:56:06.834286 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:56:06.849035 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:56:06.851700 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:56:06.869148 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:56:06.904836 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:56:06.922309 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:56:06.956034 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:56:07.261012 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:56:07.276210 systemd-journald[1201]: Time spent on flushing to /var/log/journal/1f0ef089c83748e99d1a123a490e4341 is 66.181ms for 1069 entries.
Mar 13 00:56:07.276210 systemd-journald[1201]: System Journal (/var/log/journal/1f0ef089c83748e99d1a123a490e4341) is 8M, max 195.6M, 187.6M free.
Mar 13 00:56:07.411356 systemd-journald[1201]: Received client request to flush runtime journal.
Mar 13 00:56:07.411401 kernel: loop0: detected capacity change from 0 to 128560
Mar 13 00:56:07.276103 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:56:07.311389 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:56:07.393791 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:56:07.414672 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:56:07.448785 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:56:07.506167 kernel: loop1: detected capacity change from 0 to 110984
Mar 13 00:56:07.505368 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:56:07.525329 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:56:07.542980 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:56:07.545138 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:56:08.024958 kernel: loop2: detected capacity change from 0 to 219192
Mar 13 00:56:08.128804 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 13 00:56:08.128825 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 13 00:56:08.151697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:56:08.209041 kernel: loop3: detected capacity change from 0 to 128560
Mar 13 00:56:08.265850 kernel: loop4: detected capacity change from 0 to 110984
Mar 13 00:56:09.456934 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1444826210 wd_nsec: 1444826063
Mar 13 00:56:09.531876 kernel: loop5: detected capacity change from 0 to 219192
Mar 13 00:56:09.627283 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 13 00:56:09.629211 (sd-merge)[1259]: Merged extensions into '/usr'.
Mar 13 00:56:09.666330 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:56:09.666692 systemd[1]: Reloading...
Mar 13 00:56:10.644013 zram_generator::config[1281]: No configuration found.
Mar 13 00:56:12.694181 systemd[1]: Reloading finished in 3026 ms.
Mar 13 00:56:13.020788 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:56:13.065803 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:56:13.086201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:56:13.139751 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:56:13.157745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:56:13.175287 systemd[1]: Reload requested from client PID 1321 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:56:13.175419 systemd[1]: Reloading...
Mar 13 00:56:13.180104 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:56:13.250245 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:56:13.250964 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:56:13.254099 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:56:13.255092 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:56:13.258065 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:56:13.259301 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Mar 13 00:56:13.259376 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Mar 13 00:56:13.268033 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
Mar 13 00:56:13.306272 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:56:13.306402 systemd-tmpfiles[1322]: Skipping /boot
Mar 13 00:56:13.440119 zram_generator::config[1346]: No configuration found.
Mar 13 00:56:13.457428 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:56:13.458144 systemd-tmpfiles[1322]: Skipping /boot
Mar 13 00:56:14.236052 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:56:14.237094 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 13 00:56:14.266088 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:56:14.272353 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:56:14.275252 systemd[1]: Reloading finished in 1098 ms.
Mar 13 00:56:14.309959 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 13 00:56:14.315195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:56:14.323033 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 00:56:14.344414 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 00:56:14.373008 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:56:14.463797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:56:14.583314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:56:14.621045 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:56:14.645817 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:56:14.670877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:56:14.725172 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:56:14.753863 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:56:14.807395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:56:14.841168 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:56:14.905077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:14.905274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:14.924221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:56:14.966089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:56:16.895099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:56:16.976120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:16.976986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:16.977332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:16.984001 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:56:17.045289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:56:17.046185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:56:17.115965 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:56:17.119338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:56:17.138823 systemd[1]: modprobe@loop.service: Consumed 1.910s CPU time, 1.6M memory peak.
Mar 13 00:56:17.144167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:56:17.145212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:56:17.166335 systemd[1]: modprobe@efi_pstore.service: Consumed 1.772s CPU time, 1.6M memory peak.
Mar 13 00:56:17.224768 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:56:17.384266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:17.388238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:17.416224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:56:17.443368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:56:17.465266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:56:17.582852 augenrules[1479]: No rules
Mar 13 00:56:17.486965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:17.493282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:17.722350 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:56:17.785860 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:56:17.823871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:56:17.852289 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:17.887313 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:56:17.892051 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:56:17.910855 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:56:17.938977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:56:17.941225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:56:18.002890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:56:18.003216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:56:18.078126 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:56:18.114353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:56:18.158934 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:56:18.929312 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:56:18.981992 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:56:19.046375 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:56:19.049829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:19.053823 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:56:19.058178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:19.063040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:56:19.067953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:56:19.083960 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:56:19.104968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:56:19.119901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:19.119950 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:19.139851 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 00:56:19.140269 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:56:19.140302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:19.223203 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:56:19.223980 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:56:19.225090 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:56:19.228039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:56:19.248235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:56:19.249027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:56:19.249881 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:56:19.250179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:56:19.275422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:56:19.276021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:56:19.436082 augenrules[1503]: /sbin/augenrules: No change
Mar 13 00:56:19.622249 augenrules[1532]: No rules
Mar 13 00:56:19.676399 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:56:19.697132 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:56:19.794363 kernel: kvm_amd: TSC scaling supported
Mar 13 00:56:19.796804 kernel: kvm_amd: Nested Virtualization enabled
Mar 13 00:56:19.796829 kernel: kvm_amd: Nested Paging enabled
Mar 13 00:56:19.821000 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 13 00:56:19.821174 kernel: kvm_amd: PMU virtualization is disabled
Mar 13 00:56:19.858109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:56:20.232727 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 13 00:56:20.235721 systemd-networkd[1444]: lo: Link UP
Mar 13 00:56:20.236196 systemd-networkd[1444]: lo: Gained carrier
Mar 13 00:56:20.239224 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:56:20.239779 systemd-networkd[1444]: Enumeration completed
Mar 13 00:56:20.241104 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:56:20.241152 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:56:20.242755 systemd-networkd[1444]: eth0: Link UP
Mar 13 00:56:20.243286 systemd-networkd[1444]: eth0: Gained carrier
Mar 13 00:56:20.243343 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:56:20.244097 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:56:20.253368 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:56:20.261956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:56:20.271674 systemd-networkd[1444]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 13 00:56:20.272703 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection.
Mar 13 00:56:21.175754 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 13 00:56:21.176015 systemd-timesyncd[1510]: Initial clock synchronization to Fri 2026-03-13 00:56:21.175559 UTC.
Mar 13 00:56:21.194560 kernel: EDAC MC: Ver: 3.0.0
Mar 13 00:56:21.196357 systemd-resolved[1447]: Positive Trust Anchors:
Mar 13 00:56:21.196407 systemd-resolved[1447]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:56:21.196435 systemd-resolved[1447]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:56:21.206070 systemd-resolved[1447]: Defaulting to hostname 'linux'.
Mar 13 00:56:21.208585 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:56:21.214324 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:56:21.220264 systemd[1]: Reached target network.target - Network.
Mar 13 00:56:21.224129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:56:21.228994 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:56:21.233739 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:56:21.239078 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:56:21.244310 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:56:21.249776 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:56:21.254441 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:56:21.259884 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:56:21.265195 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:56:21.265279 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:56:21.269059 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:56:21.275195 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:56:21.285094 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:56:21.298793 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:56:21.304946 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:56:21.310871 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:56:21.320551 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:56:21.325736 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:56:21.332872 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:56:21.338245 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:56:21.342444 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:56:21.346562 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:56:21.346776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:56:21.348278 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:56:21.354560 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:56:21.370821 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:56:21.377701 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:56:21.393901 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:56:21.398914 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:56:21.400612 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:56:21.408772 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:56:21.411247 jq[1551]: false
Mar 13 00:56:21.415790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:56:21.424044 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing passwd entry cache
Mar 13 00:56:21.423245 oslogin_cache_refresh[1553]: Refreshing passwd entry cache
Mar 13 00:56:21.424763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:56:21.445604 extend-filesystems[1552]: Found /dev/vda6
Mar 13 00:56:21.450876 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:56:21.455814 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting users, quitting
Mar 13 00:56:21.455814 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:56:21.455814 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing group entry cache
Mar 13 00:56:21.454059 oslogin_cache_refresh[1553]: Failure getting users, quitting
Mar 13 00:56:21.454081 oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:56:21.454241 oslogin_cache_refresh[1553]: Refreshing group entry cache
Mar 13 00:56:21.473405 oslogin_cache_refresh[1553]: Failure getting groups, quitting
Mar 13 00:56:21.475269 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting groups, quitting
Mar 13 00:56:21.475269 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:56:21.473936 oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:56:21.477603 extend-filesystems[1552]: Found /dev/vda9
Mar 13 00:56:21.495998 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:56:21.519733 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:56:21.525875 extend-filesystems[1552]: Checking size of /dev/vda9
Mar 13 00:56:21.536702 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:56:21.543115 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:56:21.556745 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:56:21.566234 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:56:21.570017 extend-filesystems[1552]: Resized partition /dev/vda9
Mar 13 00:56:21.574315 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:56:21.574746 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:56:21.575110 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:56:21.575394 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:56:21.578282 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:56:21.584434 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:56:21.585881 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:56:21.602972 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:56:21.604722 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:56:21.629583 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 13 00:56:21.637025 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:56:21.654815 tar[1580]: linux-amd64/LICENSE
Mar 13 00:56:21.654815 tar[1580]: linux-amd64/helm
Mar 13 00:56:21.669035 jq[1573]: true
Mar 13 00:56:21.687342 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:56:21.687372 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:56:21.688813 systemd-logind[1563]: New seat seat0.
Mar 13 00:56:21.694284 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:56:21.718584 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 13 00:56:21.748712 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 13 00:56:21.748712 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 13 00:56:21.748712 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 13 00:56:21.796445 extend-filesystems[1552]: Resized filesystem in /dev/vda9
Mar 13 00:56:21.815063 update_engine[1570]: I20260313 00:56:21.771839 1570 main.cc:92] Flatcar Update Engine starting
Mar 13 00:56:21.769978 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:56:21.918119 dbus-daemon[1549]: [system] SELinux support is enabled
Mar 13 00:56:22.097086 update_engine[1570]: I20260313 00:56:22.080370 1570 update_check_scheduler.cc:74] Next update check in 11m53s
Mar 13 00:56:22.081196 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:56:22.096560 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:56:22.111856 jq[1593]: true
Mar 13 00:56:22.114545 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 13 00:56:22.140972 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 13 00:56:22.144281 dbus-daemon[1549]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 13 00:56:22.145978 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:56:22.146080 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:56:22.151735 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:56:22.151759 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:56:22.157416 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:56:22.169041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 13 00:56:22.198202 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 13 00:56:22.206555 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:37834.service - OpenSSH per-connection server daemon (10.0.0.1:37834).
Mar 13 00:56:22.220992 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:56:22.516692 systemd[1]: issuegen.service: Deactivated successfully.
Mar 13 00:56:22.517336 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 13 00:56:22.533742 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 13 00:56:22.612434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 13 00:56:22.619753 bash[1631]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:56:22.622274 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:56:22.634074 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 13 00:56:22.641144 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 13 00:56:22.647119 systemd[1]: Reached target getty.target - Login Prompts.
Mar 13 00:56:22.653129 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 13 00:56:22.881962 systemd-networkd[1444]: eth0: Gained IPv6LL
Mar 13 00:56:22.892154 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 13 00:56:22.899185 systemd[1]: Reached target network-online.target - Network is Online.
Mar 13 00:56:22.910337 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 13 00:56:22.919902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:56:22.927446 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 13 00:56:23.007134 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 13 00:56:23.240176 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:56:23.257039 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 37834 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:23.263463 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 13 00:56:23.264165 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 13 00:56:23.264818 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:23.272771 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 00:56:23.287756 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 13 00:56:23.296357 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 13 00:56:23.332185 systemd-logind[1563]: New session 1 of user core.
Mar 13 00:56:23.604404 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 13 00:56:23.617868 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 13 00:56:23.640738 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 13 00:56:23.646243 systemd-logind[1563]: New session c1 of user core.
Mar 13 00:56:24.105743 containerd[1589]: time="2026-03-13T00:56:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:56:24.109111 containerd[1589]: time="2026-03-13T00:56:24.109015186Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:56:24.226367 systemd[1665]: Queued start job for default target default.target.
Mar 13 00:56:24.234288 systemd[1665]: Created slice app.slice - User Application Slice.
Mar 13 00:56:24.234315 systemd[1665]: Reached target paths.target - Paths.
Mar 13 00:56:24.234413 systemd[1665]: Reached target timers.target - Timers.
Mar 13 00:56:24.238727 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 13 00:56:24.260114 containerd[1589]: time="2026-03-13T00:56:24.259963617Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="395.899µs"
Mar 13 00:56:24.260114 containerd[1589]: time="2026-03-13T00:56:24.260111503Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:56:24.260184 containerd[1589]: time="2026-03-13T00:56:24.260171435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:56:24.262605 containerd[1589]: time="2026-03-13T00:56:24.260772427Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:56:24.262605 containerd[1589]: time="2026-03-13T00:56:24.260835295Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:56:24.262605 containerd[1589]: time="2026-03-13T00:56:24.261000964Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:56:24.262605 containerd[1589]: time="2026-03-13T00:56:24.261317786Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:56:24.262605 containerd[1589]: time="2026-03-13T00:56:24.261332533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:56:24.262836 containerd[1589]: time="2026-03-13T00:56:24.262429221Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:56:24.262836 containerd[1589]: time="2026-03-13T00:56:24.262708281Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:56:24.263086 containerd[1589]: time="2026-03-13T00:56:24.262722718Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:56:24.263086 containerd[1589]: time="2026-03-13T00:56:24.263082219Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:56:24.264164 containerd[1589]: time="2026-03-13T00:56:24.263640752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:56:24.266157 containerd[1589]: time="2026-03-13T00:56:24.265774566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:56:24.266157 containerd[1589]: time="2026-03-13T00:56:24.265886535Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:56:24.266157 containerd[1589]: time="2026-03-13T00:56:24.265899028Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:56:24.266239 containerd[1589]: time="2026-03-13T00:56:24.266219537Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:56:24.270123 containerd[1589]: time="2026-03-13T00:56:24.270034238Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:56:24.270362 containerd[1589]: time="2026-03-13T00:56:24.270297791Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:56:24.277965 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 13 00:56:24.278105 systemd[1665]: Reached target sockets.target - Sockets.
Mar 13 00:56:24.278150 systemd[1665]: Reached target basic.target - Basic System.
Mar 13 00:56:24.278200 systemd[1665]: Reached target default.target - Main User Target.
Mar 13 00:56:24.278244 systemd[1665]: Startup finished in 609ms.
Mar 13 00:56:24.278317 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 13 00:56:24.281734 containerd[1589]: time="2026-03-13T00:56:24.281012364Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:56:24.282845 containerd[1589]: time="2026-03-13T00:56:24.282209359Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:56:24.282845 containerd[1589]: time="2026-03-13T00:56:24.282369597Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:56:24.282845 containerd[1589]: time="2026-03-13T00:56:24.282469183Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:56:24.282845 containerd[1589]: time="2026-03-13T00:56:24.282719771Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:56:24.282845 containerd[1589]: time="2026-03-13T00:56:24.282790133Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:56:24.282952 containerd[1589]: time="2026-03-13T00:56:24.282929763Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:56:24.283860 containerd[1589]: time="2026-03-13T00:56:24.283007970Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:56:24.283860 containerd[1589]: time="2026-03-13T00:56:24.283072941Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:56:24.283860 containerd[1589]: time="2026-03-13T00:56:24.283084673Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:56:24.283860 containerd[1589]: time="2026-03-13T00:56:24.283094871Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:56:24.283860 containerd[1589]: time="2026-03-13T00:56:24.283176093Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:56:24.283860 containerd[1589]: time="2026-03-13T00:56:24.283747550Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.283990223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284062418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284075753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284149340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284161463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284237675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284249707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284343824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284356216Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:56:24.284381 containerd[1589]: time="2026-03-13T00:56:24.284365794Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:56:24.285622 containerd[1589]: time="2026-03-13T00:56:24.284953442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:56:24.285622 containerd[1589]: time="2026-03-13T00:56:24.285073916Z" level=info msg="Start snapshots syncer"
Mar 13 00:56:24.289417 containerd[1589]: time="2026-03-13T00:56:24.288030136Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:56:24.290069 containerd[1589]: time="2026-03-13T00:56:24.289876272Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:56:24.290376 containerd[1589]: time="2026-03-13T00:56:24.290089220Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.296802749Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.296947780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.296969770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.296980991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.296990790Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297053297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297072092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297082932Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297143746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297155918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297167470Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297427735Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297444977Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:56:24.297464 containerd[1589]: time="2026-03-13T00:56:24.297453423Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297462961Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297554242Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297568989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297587714Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297765436Z" level=info msg="runtime interface created"
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297774212Z" level=info msg="created NRI interface"
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297782989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297796103Z" level=info msg="Connect containerd service"
Mar 13 00:56:24.297872 containerd[1589]: time="2026-03-13T00:56:24.297851967Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 13 00:56:24.299809 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 13 00:56:24.302005 containerd[1589]: time="2026-03-13T00:56:24.301886540Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 13 00:56:24.374065 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:34368.service - OpenSSH per-connection server daemon (10.0.0.1:34368).
Mar 13 00:56:24.606848 tar[1580]: linux-amd64/README.md
Mar 13 00:56:24.642862 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 13 00:56:24.699295 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 34368 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:24.700014 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:24.708802 systemd-logind[1563]: New session 2 of user core.
Mar 13 00:56:24.719833 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 13 00:56:24.928105 sshd[1693]: Connection closed by 10.0.0.1 port 34368
Mar 13 00:56:24.930099 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Mar 13 00:56:24.942364 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:34368.service: Deactivated successfully.
Mar 13 00:56:24.948951 systemd[1]: session-2.scope: Deactivated successfully.
Mar 13 00:56:24.951808 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit.
Mar 13 00:56:24.955910 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:34376.service - OpenSSH per-connection server daemon (10.0.0.1:34376).
Mar 13 00:56:24.963066 systemd-logind[1563]: Removed session 2.
Mar 13 00:56:25.028384 containerd[1589]: time="2026-03-13T00:56:25.027990406Z" level=info msg="Start subscribing containerd event"
Mar 13 00:56:25.031001 containerd[1589]: time="2026-03-13T00:56:25.028225465Z" level=info msg="Start recovering state"
Mar 13 00:56:25.032049 containerd[1589]: time="2026-03-13T00:56:25.029465400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 13 00:56:25.032410 containerd[1589]: time="2026-03-13T00:56:25.031417084Z" level=info msg="Start event monitor"
Mar 13 00:56:25.032758 containerd[1589]: time="2026-03-13T00:56:25.032642010Z" level=info msg="Start cni network conf syncer for default"
Mar 13 00:56:25.033147 containerd[1589]: time="2026-03-13T00:56:25.033120935Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 13 00:56:25.033539 containerd[1589]: time="2026-03-13T00:56:25.033354982Z" level=info msg="Start streaming server"
Mar 13 00:56:25.033821 containerd[1589]: time="2026-03-13T00:56:25.033801495Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 13 00:56:25.042077 containerd[1589]: time="2026-03-13T00:56:25.038373230Z" level=info msg="runtime interface starting up..."
Mar 13 00:56:25.042077 containerd[1589]: time="2026-03-13T00:56:25.041641391Z" level=info msg="starting plugins..."
Mar 13 00:56:25.042077 containerd[1589]: time="2026-03-13T00:56:25.041797574Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 13 00:56:25.042350 systemd[1]: Started containerd.service - containerd container runtime.
Mar 13 00:56:25.045129 containerd[1589]: time="2026-03-13T00:56:25.045102103Z" level=info msg="containerd successfully booted in 0.943856s"
Mar 13 00:56:25.064622 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 34376 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:25.066402 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:25.073096 systemd-logind[1563]: New session 3 of user core.
Mar 13 00:56:25.093760 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 13 00:56:25.120620 sshd[1708]: Connection closed by 10.0.0.1 port 34376
Mar 13 00:56:25.120989 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Mar 13 00:56:25.126233 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:34376.service: Deactivated successfully.
Mar 13 00:56:25.128379 systemd[1]: session-3.scope: Deactivated successfully.
Mar 13 00:56:25.129829 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit.
Mar 13 00:56:25.131966 systemd-logind[1563]: Removed session 3.
Mar 13 00:56:27.397084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:56:27.406097 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 13 00:56:27.406995 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:56:27.414764 systemd[1]: Startup finished in 13.093s (kernel) + 17.382s (initrd) + 25.870s (userspace) = 56.347s.
Mar 13 00:56:29.297874 kubelet[1718]: E0313 00:56:29.297369 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:56:29.301852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:56:29.302152 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:56:29.302915 systemd[1]: kubelet.service: Consumed 5.436s CPU time, 258.7M memory peak.
Mar 13 00:56:35.149054 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:38296.service - OpenSSH per-connection server daemon (10.0.0.1:38296).
Mar 13 00:56:35.346397 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 38296 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:35.348876 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:35.357980 systemd-logind[1563]: New session 4 of user core.
Mar 13 00:56:35.368890 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 13 00:56:35.399797 sshd[1735]: Connection closed by 10.0.0.1 port 38296
Mar 13 00:56:35.401333 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Mar 13 00:56:35.413848 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:38296.service: Deactivated successfully.
Mar 13 00:56:35.416123 systemd[1]: session-4.scope: Deactivated successfully.
Mar 13 00:56:35.417319 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit.
Mar 13 00:56:35.421838 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:38304.service - OpenSSH per-connection server daemon (10.0.0.1:38304).
Mar 13 00:56:35.424776 systemd-logind[1563]: Removed session 4.
Mar 13 00:56:35.493176 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 38304 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:35.494923 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:35.501979 systemd-logind[1563]: New session 5 of user core.
Mar 13 00:56:35.511811 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 13 00:56:35.524231 sshd[1744]: Connection closed by 10.0.0.1 port 38304
Mar 13 00:56:35.524841 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Mar 13 00:56:35.537854 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:38304.service: Deactivated successfully.
Mar 13 00:56:35.540024 systemd[1]: session-5.scope: Deactivated successfully.
Mar 13 00:56:35.541608 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit.
Mar 13 00:56:35.544407 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:38310.service - OpenSSH per-connection server daemon (10.0.0.1:38310).
Mar 13 00:56:35.546137 systemd-logind[1563]: Removed session 5.
Mar 13 00:56:35.619133 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 38310 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:35.620967 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:35.628226 systemd-logind[1563]: New session 6 of user core.
Mar 13 00:56:35.638844 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 13 00:56:35.659324 sshd[1753]: Connection closed by 10.0.0.1 port 38310
Mar 13 00:56:35.660652 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Mar 13 00:56:35.672657 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:38310.service: Deactivated successfully.
Mar 13 00:56:35.674901 systemd[1]: session-6.scope: Deactivated successfully.
Mar 13 00:56:35.680333 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit.
Mar 13 00:56:35.692273 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:38314.service - OpenSSH per-connection server daemon (10.0.0.1:38314).
Mar 13 00:56:35.693609 systemd-logind[1563]: Removed session 6.
Mar 13 00:56:35.822101 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 38314 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:35.826878 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:35.838646 systemd-logind[1563]: New session 7 of user core.
Mar 13 00:56:35.857254 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 13 00:56:35.934155 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 13 00:56:35.934965 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:56:35.966875 sudo[1763]: pam_unix(sudo:session): session closed for user root
Mar 13 00:56:35.971097 sshd[1762]: Connection closed by 10.0.0.1 port 38314
Mar 13 00:56:35.972971 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Mar 13 00:56:35.989377 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:38314.service: Deactivated successfully.
Mar 13 00:56:35.992978 systemd[1]: session-7.scope: Deactivated successfully.
Mar 13 00:56:35.994785 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit.
Mar 13 00:56:36.000073 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:38320.service - OpenSSH per-connection server daemon (10.0.0.1:38320).
Mar 13 00:56:36.001009 systemd-logind[1563]: Removed session 7.
Mar 13 00:56:36.105538 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 38320 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:36.107826 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:36.116331 systemd-logind[1563]: New session 8 of user core.
Mar 13 00:56:36.122787 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 13 00:56:36.153677 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 13 00:56:36.154199 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:56:36.164694 sudo[1774]: pam_unix(sudo:session): session closed for user root
Mar 13 00:56:36.174965 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 13 00:56:36.175443 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:56:36.202011 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:56:36.294116 augenrules[1796]: No rules
Mar 13 00:56:36.296349 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:56:36.296901 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:56:36.298632 sudo[1773]: pam_unix(sudo:session): session closed for user root
Mar 13 00:56:36.300922 sshd[1772]: Connection closed by 10.0.0.1 port 38320
Mar 13 00:56:36.301794 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Mar 13 00:56:36.313767 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:38320.service: Deactivated successfully.
Mar 13 00:56:36.316349 systemd[1]: session-8.scope: Deactivated successfully.
Mar 13 00:56:36.317806 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit.
Mar 13 00:56:36.320443 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:38336.service - OpenSSH per-connection server daemon (10.0.0.1:38336).
Mar 13 00:56:36.322767 systemd-logind[1563]: Removed session 8.
Mar 13 00:56:36.394402 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 38336 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg
Mar 13 00:56:36.396596 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:56:36.404826 systemd-logind[1563]: New session 9 of user core.
Mar 13 00:56:36.415830 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 13 00:56:36.439645 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 00:56:36.440243 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:56:39.512214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:56:39.517161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:56:40.115949 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 00:56:40.150277 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 00:56:42.206849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:56:42.237797 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:56:42.747174 dockerd[1833]: time="2026-03-13T00:56:42.746690729Z" level=info msg="Starting up" Mar 13 00:56:42.750569 dockerd[1833]: time="2026-03-13T00:56:42.750277480Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:56:42.975803 kubelet[1844]: E0313 00:56:42.975107 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:56:42.980708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:56:42.981169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:56:42.982003 systemd[1]: kubelet.service: Consumed 2.625s CPU time, 110.7M memory peak. Mar 13 00:56:42.992845 dockerd[1833]: time="2026-03-13T00:56:42.992692517Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:56:43.086996 systemd[1]: var-lib-docker-metacopy\x2dcheck479938008-merged.mount: Deactivated successfully. Mar 13 00:56:43.131534 dockerd[1833]: time="2026-03-13T00:56:43.131418107Z" level=info msg="Loading containers: start." Mar 13 00:56:43.146585 kernel: Initializing XFRM netlink socket Mar 13 00:56:44.313319 systemd-networkd[1444]: docker0: Link UP Mar 13 00:56:44.321383 dockerd[1833]: time="2026-03-13T00:56:44.321218411Z" level=info msg="Loading containers: done." 
Mar 13 00:56:44.401466 dockerd[1833]: time="2026-03-13T00:56:44.401350970Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:56:44.402023 dockerd[1833]: time="2026-03-13T00:56:44.401723515Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:56:44.402449 dockerd[1833]: time="2026-03-13T00:56:44.402264034Z" level=info msg="Initializing buildkit" Mar 13 00:56:44.475299 dockerd[1833]: time="2026-03-13T00:56:44.475040299Z" level=info msg="Completed buildkit initialization" Mar 13 00:56:44.497301 dockerd[1833]: time="2026-03-13T00:56:44.497169854Z" level=info msg="Daemon has completed initialization" Mar 13 00:56:44.497907 dockerd[1833]: time="2026-03-13T00:56:44.497466655Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:56:44.497938 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:56:46.547898 containerd[1589]: time="2026-03-13T00:56:46.545913246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 13 00:56:47.590203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486904419.mount: Deactivated successfully. 
Mar 13 00:56:50.095019 containerd[1589]: time="2026-03-13T00:56:50.094685826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:50.096079 containerd[1589]: time="2026-03-13T00:56:50.095542212Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 13 00:56:50.097863 containerd[1589]: time="2026-03-13T00:56:50.097701586Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:50.103374 containerd[1589]: time="2026-03-13T00:56:50.103285768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:50.105724 containerd[1589]: time="2026-03-13T00:56:50.105624066Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 3.55939435s" Mar 13 00:56:50.105724 containerd[1589]: time="2026-03-13T00:56:50.105703635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 13 00:56:50.118211 containerd[1589]: time="2026-03-13T00:56:50.117690980Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 13 00:56:52.996254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 13 00:56:53.001041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:56:53.711177 containerd[1589]: time="2026-03-13T00:56:53.711018164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:53.712831 containerd[1589]: time="2026-03-13T00:56:53.712735780Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 13 00:56:53.714905 containerd[1589]: time="2026-03-13T00:56:53.714678993Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:53.718095 containerd[1589]: time="2026-03-13T00:56:53.718047878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:53.721564 containerd[1589]: time="2026-03-13T00:56:53.720829531Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 3.603109106s" Mar 13 00:56:53.721564 containerd[1589]: time="2026-03-13T00:56:53.721385259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 13 00:56:53.724344 containerd[1589]: time="2026-03-13T00:56:53.724280854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 13 00:56:53.791070 systemd[1]: 
Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:56:53.832386 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:56:54.394021 kubelet[2137]: E0313 00:56:54.393465 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:56:54.399219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:56:54.399620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:56:54.400295 systemd[1]: kubelet.service: Consumed 1.346s CPU time, 110.8M memory peak. Mar 13 00:56:56.693865 containerd[1589]: time="2026-03-13T00:56:56.693601804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:56.694957 containerd[1589]: time="2026-03-13T00:56:56.694389985Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 13 00:56:56.696271 containerd[1589]: time="2026-03-13T00:56:56.696193552Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:56.699871 containerd[1589]: time="2026-03-13T00:56:56.699625640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:56:56.700858 containerd[1589]: time="2026-03-13T00:56:56.700827640Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.976452001s" Mar 13 00:56:56.701037 containerd[1589]: time="2026-03-13T00:56:56.700956861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 13 00:56:56.703200 containerd[1589]: time="2026-03-13T00:56:56.703107190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 13 00:56:59.132946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590495083.mount: Deactivated successfully. Mar 13 00:57:00.257080 containerd[1589]: time="2026-03-13T00:57:00.256764169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:00.258203 containerd[1589]: time="2026-03-13T00:57:00.257695219Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 13 00:57:00.259069 containerd[1589]: time="2026-03-13T00:57:00.259010072Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:00.261835 containerd[1589]: time="2026-03-13T00:57:00.261630043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:00.262250 containerd[1589]: time="2026-03-13T00:57:00.262165735Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id 
\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 3.558948941s" Mar 13 00:57:00.262289 containerd[1589]: time="2026-03-13T00:57:00.262246486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 13 00:57:00.266267 containerd[1589]: time="2026-03-13T00:57:00.266217467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 13 00:57:01.002968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995069947.mount: Deactivated successfully. Mar 13 00:57:03.449335 containerd[1589]: time="2026-03-13T00:57:03.449147438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.450939 containerd[1589]: time="2026-03-13T00:57:03.450649942Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 13 00:57:03.453194 containerd[1589]: time="2026-03-13T00:57:03.453148011Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.459359 containerd[1589]: time="2026-03-13T00:57:03.459093442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.461064 containerd[1589]: time="2026-03-13T00:57:03.460853671Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.194601741s" Mar 13 00:57:03.461064 containerd[1589]: time="2026-03-13T00:57:03.460888405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 13 00:57:03.463609 containerd[1589]: time="2026-03-13T00:57:03.463218529Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:57:03.904773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499142677.mount: Deactivated successfully. Mar 13 00:57:03.913846 containerd[1589]: time="2026-03-13T00:57:03.913686942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.915330 containerd[1589]: time="2026-03-13T00:57:03.915057861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 13 00:57:03.917235 containerd[1589]: time="2026-03-13T00:57:03.917065724Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.920209 containerd[1589]: time="2026-03-13T00:57:03.920039665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.921170 containerd[1589]: time="2026-03-13T00:57:03.921044879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 457.754727ms" Mar 13 00:57:03.921170 containerd[1589]: time="2026-03-13T00:57:03.921125821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 13 00:57:03.922533 containerd[1589]: time="2026-03-13T00:57:03.922210898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 13 00:57:04.401782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686063123.mount: Deactivated successfully. Mar 13 00:57:04.403694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 13 00:57:04.407377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:57:04.830794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:05.617072 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:57:06.033370 kubelet[2234]: E0313 00:57:06.033316 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:57:06.037803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:57:06.038044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:57:06.038771 systemd[1]: kubelet.service: Consumed 1.539s CPU time, 110.4M memory peak. Mar 13 00:57:07.090067 update_engine[1570]: I20260313 00:57:07.089389 1570 update_attempter.cc:509] Updating boot flags... 
Mar 13 00:57:07.223609 containerd[1589]: time="2026-03-13T00:57:07.221323344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:07.234600 containerd[1589]: time="2026-03-13T00:57:07.231828038Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 13 00:57:07.237810 containerd[1589]: time="2026-03-13T00:57:07.237127182Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:07.243211 containerd[1589]: time="2026-03-13T00:57:07.243180416Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.320886393s" Mar 13 00:57:07.244654 containerd[1589]: time="2026-03-13T00:57:07.244572136Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 13 00:57:07.246230 containerd[1589]: time="2026-03-13T00:57:07.244862166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:11.856306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:11.856928 systemd[1]: kubelet.service: Consumed 1.539s CPU time, 110.4M memory peak. Mar 13 00:57:11.861165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:57:11.911184 systemd[1]: Reload requested from client PID 2339 ('systemctl') (unit session-9.scope)... 
Mar 13 00:57:11.911244 systemd[1]: Reloading... Mar 13 00:57:12.026666 zram_generator::config[2385]: No configuration found. Mar 13 00:57:12.262360 systemd[1]: Reloading finished in 350 ms. Mar 13 00:57:12.356165 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:57:12.356368 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:57:12.357217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:12.357336 systemd[1]: kubelet.service: Consumed 198ms CPU time, 98.3M memory peak. Mar 13 00:57:12.361318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:57:12.609963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:12.625155 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:57:12.743223 kubelet[2431]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:57:12.743223 kubelet[2431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:57:12.743853 kubelet[2431]: I0313 00:57:12.743610 2431 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:57:12.931215 kubelet[2431]: I0313 00:57:12.930901 2431 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:57:12.931215 kubelet[2431]: I0313 00:57:12.930973 2431 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:57:12.931215 kubelet[2431]: I0313 00:57:12.931166 2431 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:57:12.931215 kubelet[2431]: I0313 00:57:12.931213 2431 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:57:12.932229 kubelet[2431]: I0313 00:57:12.932013 2431 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:57:12.997822 kubelet[2431]: E0313 00:57:12.997563 2431 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:57:12.997969 kubelet[2431]: I0313 00:57:12.997891 2431 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:57:13.007338 kubelet[2431]: I0313 00:57:13.007232 2431 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:57:13.024964 kubelet[2431]: I0313 00:57:13.023448 2431 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:57:13.026989 kubelet[2431]: I0313 00:57:13.026631 2431 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:57:13.028322 kubelet[2431]: I0313 00:57:13.027140 2431 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:57:13.028629 kubelet[2431]: I0313 00:57:13.028436 2431 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:57:13.028629 
kubelet[2431]: I0313 00:57:13.028458 2431 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:57:13.028984 kubelet[2431]: I0313 00:57:13.028825 2431 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:57:13.032324 kubelet[2431]: I0313 00:57:13.032179 2431 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:57:13.034201 kubelet[2431]: I0313 00:57:13.034092 2431 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:57:13.034269 kubelet[2431]: I0313 00:57:13.034253 2431 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:57:13.034568 kubelet[2431]: I0313 00:57:13.034450 2431 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:57:13.034801 kubelet[2431]: I0313 00:57:13.034674 2431 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:57:13.037203 kubelet[2431]: E0313 00:57:13.037030 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:57:13.038654 kubelet[2431]: E0313 00:57:13.038630 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:57:13.041610 kubelet[2431]: I0313 00:57:13.041397 2431 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:57:13.044363 kubelet[2431]: I0313 00:57:13.044245 2431 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:57:13.044945 kubelet[2431]: I0313 00:57:13.044836 2431 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:57:13.045854 kubelet[2431]: W0313 00:57:13.045724 2431 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:57:13.056219 kubelet[2431]: I0313 00:57:13.056171 2431 server.go:1262] "Started kubelet" Mar 13 00:57:13.056743 kubelet[2431]: I0313 00:57:13.056639 2431 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:57:13.057269 kubelet[2431]: I0313 00:57:13.056776 2431 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:57:13.057836 kubelet[2431]: I0313 00:57:13.057316 2431 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:57:13.060115 kubelet[2431]: I0313 00:57:13.060005 2431 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:57:13.064568 kubelet[2431]: I0313 00:57:13.064409 2431 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:57:13.068840 kubelet[2431]: E0313 00:57:13.065378 2431 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c40a58cf75de7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:57:13.055935975 +0000 UTC 
m=+0.410643485,LastTimestamp:2026-03-13 00:57:13.055935975 +0000 UTC m=+0.410643485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 13 00:57:13.069803 kubelet[2431]: I0313 00:57:13.068890 2431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:57:13.069803 kubelet[2431]: I0313 00:57:13.068931 2431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:57:13.072334 kubelet[2431]: E0313 00:57:13.072240 2431 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:57:13.072923 kubelet[2431]: I0313 00:57:13.072864 2431 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:57:13.074090 kubelet[2431]: I0313 00:57:13.074018 2431 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:57:13.074394 kubelet[2431]: I0313 00:57:13.074318 2431 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:57:13.074853 kubelet[2431]: E0313 00:57:13.074675 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Mar 13 00:57:13.074853 kubelet[2431]: E0313 00:57:13.074771 2431 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:57:13.075354 kubelet[2431]: E0313 00:57:13.075272 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:57:13.075632 kubelet[2431]: I0313 00:57:13.075556 2431 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:57:13.075874 kubelet[2431]: I0313 00:57:13.075857 2431 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:57:13.079675 kubelet[2431]: I0313 00:57:13.079452 2431 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:57:13.131308 kubelet[2431]: I0313 00:57:13.131112 2431 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:57:13.131308 kubelet[2431]: I0313 00:57:13.131134 2431 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:57:13.131308 kubelet[2431]: I0313 00:57:13.131184 2431 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:57:13.136400 kubelet[2431]: I0313 00:57:13.136323 2431 policy_none.go:49] "None policy: Start" Mar 13 00:57:13.136601 kubelet[2431]: I0313 00:57:13.136433 2431 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:57:13.136601 kubelet[2431]: I0313 00:57:13.136552 2431 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:57:13.140162 kubelet[2431]: I0313 00:57:13.140109 2431 policy_none.go:47] "Start" Mar 13 00:57:13.146106 kubelet[2431]: I0313 00:57:13.145594 2431 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:57:13.150017 kubelet[2431]: I0313 00:57:13.149892 2431 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:57:13.150331 kubelet[2431]: I0313 00:57:13.150254 2431 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:57:13.150611 kubelet[2431]: I0313 00:57:13.150540 2431 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:57:13.150663 kubelet[2431]: E0313 00:57:13.150618 2431 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:57:13.154398 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:57:13.157008 kubelet[2431]: E0313 00:57:13.156834 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:57:13.168927 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:57:13.174340 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 13 00:57:13.174903 kubelet[2431]: E0313 00:57:13.174885 2431 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:57:13.194222 kubelet[2431]: E0313 00:57:13.194155 2431 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:57:13.194789 kubelet[2431]: I0313 00:57:13.194767 2431 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:57:13.195196 kubelet[2431]: I0313 00:57:13.194922 2431 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:57:13.196881 kubelet[2431]: I0313 00:57:13.196087 2431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:57:13.199451 kubelet[2431]: E0313 00:57:13.199415 2431 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:57:13.199910 kubelet[2431]: E0313 00:57:13.199734 2431 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 13 00:57:13.272238 systemd[1]: Created slice kubepods-burstable-pod91a26ba1b5cfd987d36cc88039bf81c9.slice - libcontainer container kubepods-burstable-pod91a26ba1b5cfd987d36cc88039bf81c9.slice. 
Mar 13 00:57:13.276819 kubelet[2431]: E0313 00:57:13.276786 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Mar 13 00:57:13.283562 kubelet[2431]: E0313 00:57:13.283220 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:13.289152 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 13 00:57:13.298072 kubelet[2431]: I0313 00:57:13.297865 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:57:13.298625 kubelet[2431]: E0313 00:57:13.298435 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:13.298625 kubelet[2431]: E0313 00:57:13.298588 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Mar 13 00:57:13.303204 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 13 00:57:13.307177 kubelet[2431]: E0313 00:57:13.306350 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:13.376369 kubelet[2431]: I0313 00:57:13.376222 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:13.376369 kubelet[2431]: I0313 00:57:13.376290 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91a26ba1b5cfd987d36cc88039bf81c9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"91a26ba1b5cfd987d36cc88039bf81c9\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:13.376369 kubelet[2431]: I0313 00:57:13.376316 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:13.376369 kubelet[2431]: I0313 00:57:13.376332 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:13.376369 kubelet[2431]: I0313 00:57:13.376347 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:13.376654 kubelet[2431]: I0313 00:57:13.376362 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:57:13.376654 kubelet[2431]: I0313 00:57:13.376374 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91a26ba1b5cfd987d36cc88039bf81c9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"91a26ba1b5cfd987d36cc88039bf81c9\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:13.376654 kubelet[2431]: I0313 00:57:13.376387 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91a26ba1b5cfd987d36cc88039bf81c9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"91a26ba1b5cfd987d36cc88039bf81c9\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:13.376654 kubelet[2431]: I0313 00:57:13.376400 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:13.501590 kubelet[2431]: I0313 00:57:13.501423 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:57:13.502229 kubelet[2431]: E0313 
00:57:13.502105 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Mar 13 00:57:13.590205 containerd[1589]: time="2026-03-13T00:57:13.590064327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:91a26ba1b5cfd987d36cc88039bf81c9,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:13.602626 containerd[1589]: time="2026-03-13T00:57:13.602423858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:13.611234 containerd[1589]: time="2026-03-13T00:57:13.610978123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:13.678546 kubelet[2431]: E0313 00:57:13.678227 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Mar 13 00:57:13.906466 kubelet[2431]: I0313 00:57:13.906188 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:57:13.906466 kubelet[2431]: E0313 00:57:13.906663 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Mar 13 00:57:14.033120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873648814.mount: Deactivated successfully. 
Mar 13 00:57:14.041950 containerd[1589]: time="2026-03-13T00:57:14.041822680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:57:14.046268 containerd[1589]: time="2026-03-13T00:57:14.046133322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 13 00:57:14.048823 containerd[1589]: time="2026-03-13T00:57:14.048666061Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:57:14.051747 containerd[1589]: time="2026-03-13T00:57:14.051654856Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:57:14.053015 containerd[1589]: time="2026-03-13T00:57:14.052926866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:57:14.054308 containerd[1589]: time="2026-03-13T00:57:14.054235963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:57:14.055829 containerd[1589]: time="2026-03-13T00:57:14.055595784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:57:14.058141 containerd[1589]: time="2026-03-13T00:57:14.058018974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 
00:57:14.058651 containerd[1589]: time="2026-03-13T00:57:14.058459476Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 464.425342ms" Mar 13 00:57:14.060145 containerd[1589]: time="2026-03-13T00:57:14.060064254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 453.955866ms" Mar 13 00:57:14.064200 containerd[1589]: time="2026-03-13T00:57:14.064120474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 451.441348ms" Mar 13 00:57:14.136457 containerd[1589]: time="2026-03-13T00:57:14.136244196Z" level=info msg="connecting to shim 4c5a2cf83d459961c75d54e1c9adc5d504bb03febe67ac63bf9d4f0fb5ce43a8" address="unix:///run/containerd/s/c8b4c33c87686774d57029dbda600d2b11ee398273e7a8f6d0cd8fc3696807db" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:14.151644 containerd[1589]: time="2026-03-13T00:57:14.151414999Z" level=info msg="connecting to shim db6c864439d8a9d3d0d7bb3325fe7e69f2cb0ba503678494518582e536622d68" address="unix:///run/containerd/s/56492fc2c787eecf34d0133aac8b2799c798ab12299681fd72862b5b6ce737d6" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:14.178629 containerd[1589]: time="2026-03-13T00:57:14.178373358Z" level=info msg="connecting to shim 
bc19d0e79d67ea27d5dd30f2755418e0d8fee0fae1670c56b023eded157e9f2e" address="unix:///run/containerd/s/bced1600f758299cb9e00e235d2c3fda3a6d98d1d4e4c7fdb761b85e561a2043" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:14.249889 systemd[1]: Started cri-containerd-db6c864439d8a9d3d0d7bb3325fe7e69f2cb0ba503678494518582e536622d68.scope - libcontainer container db6c864439d8a9d3d0d7bb3325fe7e69f2cb0ba503678494518582e536622d68. Mar 13 00:57:14.300843 systemd[1]: Started cri-containerd-bc19d0e79d67ea27d5dd30f2755418e0d8fee0fae1670c56b023eded157e9f2e.scope - libcontainer container bc19d0e79d67ea27d5dd30f2755418e0d8fee0fae1670c56b023eded157e9f2e. Mar 13 00:57:14.309675 systemd[1]: Started cri-containerd-4c5a2cf83d459961c75d54e1c9adc5d504bb03febe67ac63bf9d4f0fb5ce43a8.scope - libcontainer container 4c5a2cf83d459961c75d54e1c9adc5d504bb03febe67ac63bf9d4f0fb5ce43a8. Mar 13 00:57:14.350599 kubelet[2431]: E0313 00:57:14.350052 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:57:14.408732 kubelet[2431]: E0313 00:57:14.408600 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:57:14.414374 kubelet[2431]: E0313 00:57:14.414217 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:57:14.473632 containerd[1589]: time="2026-03-13T00:57:14.473295058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c5a2cf83d459961c75d54e1c9adc5d504bb03febe67ac63bf9d4f0fb5ce43a8\"" Mar 13 00:57:14.483191 kubelet[2431]: E0313 00:57:14.483036 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s" Mar 13 00:57:14.490044 containerd[1589]: time="2026-03-13T00:57:14.489861147Z" level=info msg="CreateContainer within sandbox \"4c5a2cf83d459961c75d54e1c9adc5d504bb03febe67ac63bf9d4f0fb5ce43a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:57:14.502885 containerd[1589]: time="2026-03-13T00:57:14.502755614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc19d0e79d67ea27d5dd30f2755418e0d8fee0fae1670c56b023eded157e9f2e\"" Mar 13 00:57:14.503879 containerd[1589]: time="2026-03-13T00:57:14.503807536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:91a26ba1b5cfd987d36cc88039bf81c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6c864439d8a9d3d0d7bb3325fe7e69f2cb0ba503678494518582e536622d68\"" Mar 13 00:57:14.513434 containerd[1589]: time="2026-03-13T00:57:14.513323366Z" level=info msg="CreateContainer within sandbox \"bc19d0e79d67ea27d5dd30f2755418e0d8fee0fae1670c56b023eded157e9f2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:57:14.515607 containerd[1589]: time="2026-03-13T00:57:14.514335023Z" level=info 
msg="Container e4aae050297124ffb5a51bdd8ae28dfe1b738cc8305e4e231a0ec6f5b2fad079: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:14.516564 containerd[1589]: time="2026-03-13T00:57:14.516418483Z" level=info msg="CreateContainer within sandbox \"db6c864439d8a9d3d0d7bb3325fe7e69f2cb0ba503678494518582e536622d68\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:57:14.528419 containerd[1589]: time="2026-03-13T00:57:14.528390684Z" level=info msg="CreateContainer within sandbox \"4c5a2cf83d459961c75d54e1c9adc5d504bb03febe67ac63bf9d4f0fb5ce43a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e4aae050297124ffb5a51bdd8ae28dfe1b738cc8305e4e231a0ec6f5b2fad079\"" Mar 13 00:57:14.530369 containerd[1589]: time="2026-03-13T00:57:14.530344424Z" level=info msg="StartContainer for \"e4aae050297124ffb5a51bdd8ae28dfe1b738cc8305e4e231a0ec6f5b2fad079\"" Mar 13 00:57:14.534977 containerd[1589]: time="2026-03-13T00:57:14.534899660Z" level=info msg="connecting to shim e4aae050297124ffb5a51bdd8ae28dfe1b738cc8305e4e231a0ec6f5b2fad079" address="unix:///run/containerd/s/c8b4c33c87686774d57029dbda600d2b11ee398273e7a8f6d0cd8fc3696807db" protocol=ttrpc version=3 Mar 13 00:57:14.538387 kubelet[2431]: E0313 00:57:14.538064 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:57:14.540187 containerd[1589]: time="2026-03-13T00:57:14.539457747Z" level=info msg="Container 930394b85e26b864517dfa08401fca725d918e8472c7b54d0e64d89140c559f0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:14.544328 containerd[1589]: time="2026-03-13T00:57:14.544270512Z" level=info msg="Container 87281fa78d2521d7ca07661d116018c316cd045e0ee73f17fab523c16d73b17e: CDI 
devices from CRI Config.CDIDevices: []" Mar 13 00:57:14.552375 containerd[1589]: time="2026-03-13T00:57:14.552185811Z" level=info msg="CreateContainer within sandbox \"bc19d0e79d67ea27d5dd30f2755418e0d8fee0fae1670c56b023eded157e9f2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"930394b85e26b864517dfa08401fca725d918e8472c7b54d0e64d89140c559f0\"" Mar 13 00:57:14.553209 containerd[1589]: time="2026-03-13T00:57:14.553083098Z" level=info msg="StartContainer for \"930394b85e26b864517dfa08401fca725d918e8472c7b54d0e64d89140c559f0\"" Mar 13 00:57:14.555202 containerd[1589]: time="2026-03-13T00:57:14.555114814Z" level=info msg="connecting to shim 930394b85e26b864517dfa08401fca725d918e8472c7b54d0e64d89140c559f0" address="unix:///run/containerd/s/bced1600f758299cb9e00e235d2c3fda3a6d98d1d4e4c7fdb761b85e561a2043" protocol=ttrpc version=3 Mar 13 00:57:14.557617 containerd[1589]: time="2026-03-13T00:57:14.557229861Z" level=info msg="CreateContainer within sandbox \"db6c864439d8a9d3d0d7bb3325fe7e69f2cb0ba503678494518582e536622d68\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87281fa78d2521d7ca07661d116018c316cd045e0ee73f17fab523c16d73b17e\"" Mar 13 00:57:14.560010 containerd[1589]: time="2026-03-13T00:57:14.559980199Z" level=info msg="StartContainer for \"87281fa78d2521d7ca07661d116018c316cd045e0ee73f17fab523c16d73b17e\"" Mar 13 00:57:14.561654 containerd[1589]: time="2026-03-13T00:57:14.561412298Z" level=info msg="connecting to shim 87281fa78d2521d7ca07661d116018c316cd045e0ee73f17fab523c16d73b17e" address="unix:///run/containerd/s/56492fc2c787eecf34d0133aac8b2799c798ab12299681fd72862b5b6ce737d6" protocol=ttrpc version=3 Mar 13 00:57:14.571864 systemd[1]: Started cri-containerd-e4aae050297124ffb5a51bdd8ae28dfe1b738cc8305e4e231a0ec6f5b2fad079.scope - libcontainer container e4aae050297124ffb5a51bdd8ae28dfe1b738cc8305e4e231a0ec6f5b2fad079. 
Mar 13 00:57:14.601677 systemd[1]: Started cri-containerd-87281fa78d2521d7ca07661d116018c316cd045e0ee73f17fab523c16d73b17e.scope - libcontainer container 87281fa78d2521d7ca07661d116018c316cd045e0ee73f17fab523c16d73b17e. Mar 13 00:57:14.618733 systemd[1]: Started cri-containerd-930394b85e26b864517dfa08401fca725d918e8472c7b54d0e64d89140c559f0.scope - libcontainer container 930394b85e26b864517dfa08401fca725d918e8472c7b54d0e64d89140c559f0. Mar 13 00:57:14.710975 kubelet[2431]: I0313 00:57:14.710895 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:57:14.711802 kubelet[2431]: E0313 00:57:14.711618 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Mar 13 00:57:14.725774 containerd[1589]: time="2026-03-13T00:57:14.725141312Z" level=info msg="StartContainer for \"e4aae050297124ffb5a51bdd8ae28dfe1b738cc8305e4e231a0ec6f5b2fad079\" returns successfully" Mar 13 00:57:14.730742 containerd[1589]: time="2026-03-13T00:57:14.730435306Z" level=info msg="StartContainer for \"87281fa78d2521d7ca07661d116018c316cd045e0ee73f17fab523c16d73b17e\" returns successfully" Mar 13 00:57:14.786952 containerd[1589]: time="2026-03-13T00:57:14.786767134Z" level=info msg="StartContainer for \"930394b85e26b864517dfa08401fca725d918e8472c7b54d0e64d89140c559f0\" returns successfully" Mar 13 00:57:15.300024 kubelet[2431]: E0313 00:57:15.299824 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:15.360105 kubelet[2431]: E0313 00:57:15.359876 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:15.513803 kubelet[2431]: E0313 00:57:15.513411 2431 kubelet.go:3216] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:16.321867 kubelet[2431]: I0313 00:57:16.321467 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:57:16.446573 kubelet[2431]: E0313 00:57:16.446195 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:16.447302 kubelet[2431]: E0313 00:57:16.446812 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:17.466110 kubelet[2431]: E0313 00:57:17.465005 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:17.469004 kubelet[2431]: E0313 00:57:17.468952 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:57:20.493038 kubelet[2431]: E0313 00:57:20.492797 2431 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 13 00:57:20.568453 kubelet[2431]: I0313 00:57:20.568169 2431 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:57:20.568453 kubelet[2431]: E0313 00:57:20.568275 2431 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 13 00:57:20.574255 kubelet[2431]: I0313 00:57:20.574221 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:20.605438 kubelet[2431]: E0313 00:57:20.605303 2431 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.189c40a58cf75de7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:57:13.055935975 +0000 UTC m=+0.410643485,LastTimestamp:2026-03-13 00:57:13.055935975 +0000 UTC m=+0.410643485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 13 00:57:20.652565 kubelet[2431]: E0313 00:57:20.652363 2431 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:20.652565 kubelet[2431]: I0313 00:57:20.652458 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:57:20.657942 kubelet[2431]: E0313 00:57:20.657654 2431 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 13 00:57:20.657942 kubelet[2431]: I0313 00:57:20.657682 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:20.662245 kubelet[2431]: E0313 00:57:20.662215 2431 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:21.333139 kubelet[2431]: I0313 00:57:21.331953 2431 apiserver.go:52] "Watching apiserver" Mar 13 00:57:21.374375 kubelet[2431]: I0313 00:57:21.374153 2431 desired_state_of_world_populator.go:154] "Finished 
populating initial desired state of world" Mar 13 00:57:23.110909 systemd[1]: Reload requested from client PID 2721 ('systemctl') (unit session-9.scope)... Mar 13 00:57:23.111000 systemd[1]: Reloading... Mar 13 00:57:23.258654 zram_generator::config[2761]: No configuration found. Mar 13 00:57:23.479685 kubelet[2431]: I0313 00:57:23.478293 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:23.581966 systemd[1]: Reloading finished in 470 ms. Mar 13 00:57:23.633338 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:57:23.655361 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:57:23.655964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:23.656091 systemd[1]: kubelet.service: Consumed 2.155s CPU time, 128.3M memory peak. Mar 13 00:57:23.659046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:57:23.936918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:23.953055 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:57:24.071849 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:57:24.071849 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:57:24.072274 kubelet[2809]: I0313 00:57:24.071921 2809 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:57:24.083108 kubelet[2809]: I0313 00:57:24.083022 2809 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:57:24.083108 kubelet[2809]: I0313 00:57:24.083085 2809 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:57:24.083108 kubelet[2809]: I0313 00:57:24.083111 2809 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:57:24.083108 kubelet[2809]: I0313 00:57:24.083123 2809 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:57:24.083426 kubelet[2809]: I0313 00:57:24.083367 2809 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:57:24.085446 kubelet[2809]: I0313 00:57:24.085329 2809 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:57:24.096955 kubelet[2809]: I0313 00:57:24.095803 2809 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:57:24.113443 kubelet[2809]: I0313 00:57:24.113321 2809 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:57:24.127146 kubelet[2809]: I0313 00:57:24.127120 2809 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:57:24.128003 kubelet[2809]: I0313 00:57:24.127817 2809 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:57:24.128269 kubelet[2809]: I0313 00:57:24.128077 2809 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:57:24.128644 kubelet[2809]: I0313 00:57:24.128443 2809 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:57:24.128644 
kubelet[2809]: I0313 00:57:24.128624 2809 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:57:24.128841 kubelet[2809]: I0313 00:57:24.128653 2809 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:57:24.129329 kubelet[2809]: I0313 00:57:24.129061 2809 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:57:24.129329 kubelet[2809]: I0313 00:57:24.129271 2809 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:57:24.129329 kubelet[2809]: I0313 00:57:24.129283 2809 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:57:24.129329 kubelet[2809]: I0313 00:57:24.129303 2809 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:57:24.129329 kubelet[2809]: I0313 00:57:24.129317 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:57:24.131837 kubelet[2809]: I0313 00:57:24.131612 2809 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:57:24.132846 kubelet[2809]: I0313 00:57:24.132681 2809 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:57:24.132846 kubelet[2809]: I0313 00:57:24.132809 2809 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:57:24.141417 kubelet[2809]: I0313 00:57:24.141187 2809 server.go:1262] "Started kubelet" Mar 13 00:57:24.152102 kubelet[2809]: I0313 00:57:24.152076 2809 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:57:24.152657 kubelet[2809]: I0313 00:57:24.152382 2809 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:57:24.157967 kubelet[2809]: I0313 00:57:24.157621 2809 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:57:24.165340 kubelet[2809]: I0313 00:57:24.165109 2809 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:57:24.179127 kubelet[2809]: I0313 00:57:24.179109 2809 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:57:24.184460 kubelet[2809]: I0313 00:57:24.184440 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:57:24.190683 kubelet[2809]: I0313 00:57:24.189058 2809 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:57:24.192744 kubelet[2809]: I0313 00:57:24.191843 2809 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:57:24.192744 kubelet[2809]: E0313 00:57:24.192212 2809 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:57:24.193176 kubelet[2809]: I0313 00:57:24.193047 2809 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:57:24.194845 kubelet[2809]: I0313 00:57:24.193814 2809 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:57:24.216417 kubelet[2809]: E0313 00:57:24.216392 2809 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:57:24.224914 kubelet[2809]: I0313 00:57:24.224773 2809 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:57:24.224914 kubelet[2809]: I0313 00:57:24.224836 2809 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:57:24.224914 kubelet[2809]: I0313 00:57:24.224901 2809 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:57:24.247317 kubelet[2809]: I0313 00:57:24.247023 2809 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:57:24.256146 kubelet[2809]: I0313 00:57:24.255273 2809 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:57:24.256146 kubelet[2809]: I0313 00:57:24.256048 2809 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:57:24.256146 kubelet[2809]: I0313 00:57:24.256078 2809 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:57:24.256146 kubelet[2809]: E0313 00:57:24.256118 2809 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:57:24.330097 kubelet[2809]: I0313 00:57:24.329021 2809 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:57:24.330097 kubelet[2809]: I0313 00:57:24.329046 2809 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:57:24.330097 kubelet[2809]: I0313 00:57:24.329073 2809 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:57:24.330097 kubelet[2809]: I0313 00:57:24.329261 2809 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:57:24.330459 kubelet[2809]: I0313 00:57:24.329272 2809 state_mem.go:96] "Updated CPUSet assignments" assignments={} 
Mar 13 00:57:24.330459 kubelet[2809]: I0313 00:57:24.330288 2809 policy_none.go:49] "None policy: Start" Mar 13 00:57:24.330459 kubelet[2809]: I0313 00:57:24.330305 2809 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:57:24.330459 kubelet[2809]: I0313 00:57:24.330320 2809 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:57:24.331988 kubelet[2809]: I0313 00:57:24.331819 2809 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:57:24.331988 kubelet[2809]: I0313 00:57:24.331884 2809 policy_none.go:47] "Start" Mar 13 00:57:24.346005 kubelet[2809]: E0313 00:57:24.345981 2809 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:57:24.346354 kubelet[2809]: I0313 00:57:24.346147 2809 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:57:24.346354 kubelet[2809]: I0313 00:57:24.346158 2809 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:57:24.347375 kubelet[2809]: I0313 00:57:24.347224 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:57:24.352884 kubelet[2809]: E0313 00:57:24.352274 2809 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:57:24.358103 kubelet[2809]: I0313 00:57:24.357888 2809 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:57:24.360332 kubelet[2809]: I0313 00:57:24.360228 2809 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:24.363028 kubelet[2809]: I0313 00:57:24.362930 2809 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:24.383346 kubelet[2809]: E0313 00:57:24.383211 2809 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:24.396887 kubelet[2809]: I0313 00:57:24.396416 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:24.396887 kubelet[2809]: I0313 00:57:24.396817 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:24.398959 kubelet[2809]: I0313 00:57:24.397052 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 
00:57:24.398959 kubelet[2809]: I0313 00:57:24.397652 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:24.400811 kubelet[2809]: I0313 00:57:24.399082 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91a26ba1b5cfd987d36cc88039bf81c9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"91a26ba1b5cfd987d36cc88039bf81c9\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:24.403649 kubelet[2809]: I0313 00:57:24.400911 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:57:24.403649 kubelet[2809]: I0313 00:57:24.402087 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91a26ba1b5cfd987d36cc88039bf81c9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"91a26ba1b5cfd987d36cc88039bf81c9\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:57:24.403649 kubelet[2809]: I0313 00:57:24.402967 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91a26ba1b5cfd987d36cc88039bf81c9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"91a26ba1b5cfd987d36cc88039bf81c9\") " pod="kube-system/kube-apiserver-localhost" Mar 13 
00:57:24.403649 kubelet[2809]: I0313 00:57:24.403300 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:57:24.465885 kubelet[2809]: I0313 00:57:24.465445 2809 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:57:24.477599 kubelet[2809]: I0313 00:57:24.477423 2809 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 13 00:57:24.477794 kubelet[2809]: I0313 00:57:24.477616 2809 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:57:25.130792 kubelet[2809]: I0313 00:57:25.130605 2809 apiserver.go:52] "Watching apiserver" Mar 13 00:57:25.195858 kubelet[2809]: I0313 00:57:25.195763 2809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:57:25.212177 kubelet[2809]: I0313 00:57:25.212060 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.211921268 podStartE2EDuration="1.211921268s" podCreationTimestamp="2026-03-13 00:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:25.198835167 +0000 UTC m=+1.235230067" watchObservedRunningTime="2026-03-13 00:57:25.211921268 +0000 UTC m=+1.248316168" Mar 13 00:57:25.212177 kubelet[2809]: I0313 00:57:25.212142 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.212137021 podStartE2EDuration="2.212137021s" podCreationTimestamp="2026-03-13 00:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:25.211804144 +0000 UTC m=+1.248199054" watchObservedRunningTime="2026-03-13 00:57:25.212137021 +0000 UTC m=+1.248531921" Mar 13 00:57:25.223598 kubelet[2809]: I0313 00:57:25.223393 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.223383753 podStartE2EDuration="1.223383753s" podCreationTimestamp="2026-03-13 00:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:25.22267755 +0000 UTC m=+1.259072450" watchObservedRunningTime="2026-03-13 00:57:25.223383753 +0000 UTC m=+1.259778652" Mar 13 00:57:25.294644 kubelet[2809]: I0313 00:57:25.294355 2809 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:57:25.304404 kubelet[2809]: E0313 00:57:25.304299 2809 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 13 00:57:27.619421 kubelet[2809]: I0313 00:57:27.619172 2809 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:57:27.620033 containerd[1589]: time="2026-03-13T00:57:27.619876869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:57:27.621306 kubelet[2809]: I0313 00:57:27.621236 2809 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:57:28.453343 systemd[1]: Created slice kubepods-besteffort-poda89cc925_1b0f_480f_b5dd_440dfa2753a6.slice - libcontainer container kubepods-besteffort-poda89cc925_1b0f_480f_b5dd_440dfa2753a6.slice. 
Mar 13 00:57:28.539612 kubelet[2809]: I0313 00:57:28.539338 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a89cc925-1b0f-480f-b5dd-440dfa2753a6-kube-proxy\") pod \"kube-proxy-d2dzg\" (UID: \"a89cc925-1b0f-480f-b5dd-440dfa2753a6\") " pod="kube-system/kube-proxy-d2dzg" Mar 13 00:57:28.539612 kubelet[2809]: I0313 00:57:28.539606 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a89cc925-1b0f-480f-b5dd-440dfa2753a6-xtables-lock\") pod \"kube-proxy-d2dzg\" (UID: \"a89cc925-1b0f-480f-b5dd-440dfa2753a6\") " pod="kube-system/kube-proxy-d2dzg" Mar 13 00:57:28.539979 kubelet[2809]: I0313 00:57:28.539640 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a89cc925-1b0f-480f-b5dd-440dfa2753a6-lib-modules\") pod \"kube-proxy-d2dzg\" (UID: \"a89cc925-1b0f-480f-b5dd-440dfa2753a6\") " pod="kube-system/kube-proxy-d2dzg" Mar 13 00:57:28.539979 kubelet[2809]: I0313 00:57:28.539665 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbrt\" (UniqueName: \"kubernetes.io/projected/a89cc925-1b0f-480f-b5dd-440dfa2753a6-kube-api-access-nhbrt\") pod \"kube-proxy-d2dzg\" (UID: \"a89cc925-1b0f-480f-b5dd-440dfa2753a6\") " pod="kube-system/kube-proxy-d2dzg" Mar 13 00:57:28.791671 containerd[1589]: time="2026-03-13T00:57:28.791421370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2dzg,Uid:a89cc925-1b0f-480f-b5dd-440dfa2753a6,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:28.884838 systemd[1]: Created slice kubepods-besteffort-pod07a91351_c6e5_4e53_b6b3_3c679aa9edad.slice - libcontainer container kubepods-besteffort-pod07a91351_c6e5_4e53_b6b3_3c679aa9edad.slice. 
Mar 13 00:57:28.898913 containerd[1589]: time="2026-03-13T00:57:28.898790895Z" level=info msg="connecting to shim e05b417fc3803430070160f351bc1b0b469fbe47fd7e72e98fa14e7e9494482d" address="unix:///run/containerd/s/92c37347596e0c36be9194dc9aabb8f5aef6ed752be401f7592aa287c5d2553a" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:28.986367 kubelet[2809]: I0313 00:57:28.986075 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc664\" (UniqueName: \"kubernetes.io/projected/07a91351-c6e5-4e53-b6b3-3c679aa9edad-kube-api-access-pc664\") pod \"tigera-operator-5588576f44-tpjgw\" (UID: \"07a91351-c6e5-4e53-b6b3-3c679aa9edad\") " pod="tigera-operator/tigera-operator-5588576f44-tpjgw" Mar 13 00:57:28.987230 kubelet[2809]: I0313 00:57:28.987130 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07a91351-c6e5-4e53-b6b3-3c679aa9edad-var-lib-calico\") pod \"tigera-operator-5588576f44-tpjgw\" (UID: \"07a91351-c6e5-4e53-b6b3-3c679aa9edad\") " pod="tigera-operator/tigera-operator-5588576f44-tpjgw" Mar 13 00:57:29.077825 systemd[1]: Started cri-containerd-e05b417fc3803430070160f351bc1b0b469fbe47fd7e72e98fa14e7e9494482d.scope - libcontainer container e05b417fc3803430070160f351bc1b0b469fbe47fd7e72e98fa14e7e9494482d. 
Mar 13 00:57:29.197333 containerd[1589]: time="2026-03-13T00:57:29.196979958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-tpjgw,Uid:07a91351-c6e5-4e53-b6b3-3c679aa9edad,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:57:29.234974 containerd[1589]: time="2026-03-13T00:57:29.234848717Z" level=info msg="connecting to shim 91143f44c58dc0b0e46208b458a45af66b869b1ddbced12a0d6696e8f0f5beaf" address="unix:///run/containerd/s/7fdaf0b7b0809481442901e2463b62822d1cf24138a361f815906b6430986d78" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:29.292424 containerd[1589]: time="2026-03-13T00:57:29.292330844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d2dzg,Uid:a89cc925-1b0f-480f-b5dd-440dfa2753a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e05b417fc3803430070160f351bc1b0b469fbe47fd7e72e98fa14e7e9494482d\"" Mar 13 00:57:29.304628 containerd[1589]: time="2026-03-13T00:57:29.304385037Z" level=info msg="CreateContainer within sandbox \"e05b417fc3803430070160f351bc1b0b469fbe47fd7e72e98fa14e7e9494482d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:57:29.332207 containerd[1589]: time="2026-03-13T00:57:29.330829957Z" level=info msg="Container 6226bfa8ccc844d0307647e3dca6aaafb5d616abce34356a2cbfefc37520b7ae: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:29.346211 systemd[1]: Started cri-containerd-91143f44c58dc0b0e46208b458a45af66b869b1ddbced12a0d6696e8f0f5beaf.scope - libcontainer container 91143f44c58dc0b0e46208b458a45af66b869b1ddbced12a0d6696e8f0f5beaf. 
Mar 13 00:57:29.349386 containerd[1589]: time="2026-03-13T00:57:29.349210194Z" level=info msg="CreateContainer within sandbox \"e05b417fc3803430070160f351bc1b0b469fbe47fd7e72e98fa14e7e9494482d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6226bfa8ccc844d0307647e3dca6aaafb5d616abce34356a2cbfefc37520b7ae\"" Mar 13 00:57:29.351348 containerd[1589]: time="2026-03-13T00:57:29.351246651Z" level=info msg="StartContainer for \"6226bfa8ccc844d0307647e3dca6aaafb5d616abce34356a2cbfefc37520b7ae\"" Mar 13 00:57:29.352944 containerd[1589]: time="2026-03-13T00:57:29.352840965Z" level=info msg="connecting to shim 6226bfa8ccc844d0307647e3dca6aaafb5d616abce34356a2cbfefc37520b7ae" address="unix:///run/containerd/s/92c37347596e0c36be9194dc9aabb8f5aef6ed752be401f7592aa287c5d2553a" protocol=ttrpc version=3 Mar 13 00:57:29.401652 systemd[1]: Started cri-containerd-6226bfa8ccc844d0307647e3dca6aaafb5d616abce34356a2cbfefc37520b7ae.scope - libcontainer container 6226bfa8ccc844d0307647e3dca6aaafb5d616abce34356a2cbfefc37520b7ae. 
Mar 13 00:57:29.578403 containerd[1589]: time="2026-03-13T00:57:29.578235347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-tpjgw,Uid:07a91351-c6e5-4e53-b6b3-3c679aa9edad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"91143f44c58dc0b0e46208b458a45af66b869b1ddbced12a0d6696e8f0f5beaf\"" Mar 13 00:57:29.581357 containerd[1589]: time="2026-03-13T00:57:29.581081627Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:57:29.613074 containerd[1589]: time="2026-03-13T00:57:29.612937106Z" level=info msg="StartContainer for \"6226bfa8ccc844d0307647e3dca6aaafb5d616abce34356a2cbfefc37520b7ae\" returns successfully" Mar 13 00:57:30.730223 kubelet[2809]: I0313 00:57:30.729410 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d2dzg" podStartSLOduration=2.72934954 podStartE2EDuration="2.72934954s" podCreationTimestamp="2026-03-13 00:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:30.7079565 +0000 UTC m=+6.744351410" watchObservedRunningTime="2026-03-13 00:57:30.72934954 +0000 UTC m=+6.765744441" Mar 13 00:57:31.200024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1247982710.mount: Deactivated successfully. 
Mar 13 00:57:35.201389 containerd[1589]: time="2026-03-13T00:57:35.201170450Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:35.203671 containerd[1589]: time="2026-03-13T00:57:35.203167155Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:57:35.205278 containerd[1589]: time="2026-03-13T00:57:35.205206474Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:35.209922 containerd[1589]: time="2026-03-13T00:57:35.209838595Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:35.211042 containerd[1589]: time="2026-03-13T00:57:35.210850117Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.629730199s" Mar 13 00:57:35.211042 containerd[1589]: time="2026-03-13T00:57:35.210976161Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:57:35.220861 containerd[1589]: time="2026-03-13T00:57:35.220652023Z" level=info msg="CreateContainer within sandbox \"91143f44c58dc0b0e46208b458a45af66b869b1ddbced12a0d6696e8f0f5beaf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:57:35.238066 containerd[1589]: time="2026-03-13T00:57:35.237381114Z" level=info msg="Container 
babdc7402398df8b3eed838098b745fb35893441d62ecc8a9bc91a2b2b6e9206: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:35.257372 containerd[1589]: time="2026-03-13T00:57:35.257234662Z" level=info msg="CreateContainer within sandbox \"91143f44c58dc0b0e46208b458a45af66b869b1ddbced12a0d6696e8f0f5beaf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"babdc7402398df8b3eed838098b745fb35893441d62ecc8a9bc91a2b2b6e9206\"" Mar 13 00:57:35.259265 containerd[1589]: time="2026-03-13T00:57:35.259173039Z" level=info msg="StartContainer for \"babdc7402398df8b3eed838098b745fb35893441d62ecc8a9bc91a2b2b6e9206\"" Mar 13 00:57:35.261176 containerd[1589]: time="2026-03-13T00:57:35.260891195Z" level=info msg="connecting to shim babdc7402398df8b3eed838098b745fb35893441d62ecc8a9bc91a2b2b6e9206" address="unix:///run/containerd/s/7fdaf0b7b0809481442901e2463b62822d1cf24138a361f815906b6430986d78" protocol=ttrpc version=3 Mar 13 00:57:35.310009 systemd[1]: Started cri-containerd-babdc7402398df8b3eed838098b745fb35893441d62ecc8a9bc91a2b2b6e9206.scope - libcontainer container babdc7402398df8b3eed838098b745fb35893441d62ecc8a9bc91a2b2b6e9206. 
Mar 13 00:57:35.389938 containerd[1589]: time="2026-03-13T00:57:35.389828245Z" level=info msg="StartContainer for \"babdc7402398df8b3eed838098b745fb35893441d62ecc8a9bc91a2b2b6e9206\" returns successfully" Mar 13 00:57:35.999632 kubelet[2809]: I0313 00:57:35.999380 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-tpjgw" podStartSLOduration=2.367477597 podStartE2EDuration="7.999363548s" podCreationTimestamp="2026-03-13 00:57:28 +0000 UTC" firstStartedPulling="2026-03-13 00:57:29.58076085 +0000 UTC m=+5.617155749" lastFinishedPulling="2026-03-13 00:57:35.212646799 +0000 UTC m=+11.249041700" observedRunningTime="2026-03-13 00:57:35.997919854 +0000 UTC m=+12.034314753" watchObservedRunningTime="2026-03-13 00:57:35.999363548 +0000 UTC m=+12.035758458" Mar 13 00:57:40.522290 sudo[1809]: pam_unix(sudo:session): session closed for user root Mar 13 00:57:40.527233 sshd[1808]: Connection closed by 10.0.0.1 port 38336 Mar 13 00:57:40.530672 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Mar 13 00:57:40.541727 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:57:40.547693 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:38336.service: Deactivated successfully. Mar 13 00:57:40.555186 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:57:40.556294 systemd[1]: session-9.scope: Consumed 12.701s CPU time, 232.9M memory peak. Mar 13 00:57:40.563396 systemd-logind[1563]: Removed session 9. Mar 13 00:57:47.801273 systemd[1]: Created slice kubepods-besteffort-pod72af3405_dbab_4260_af77_478d33a26190.slice - libcontainer container kubepods-besteffort-pod72af3405_dbab_4260_af77_478d33a26190.slice. 
Mar 13 00:57:47.960903 kubelet[2809]: I0313 00:57:47.960708 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72af3405-dbab-4260-af77-478d33a26190-tigera-ca-bundle\") pod \"calico-typha-69f997f8b9-qs5n2\" (UID: \"72af3405-dbab-4260-af77-478d33a26190\") " pod="calico-system/calico-typha-69f997f8b9-qs5n2" Mar 13 00:57:47.960903 kubelet[2809]: I0313 00:57:47.960745 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/72af3405-dbab-4260-af77-478d33a26190-typha-certs\") pod \"calico-typha-69f997f8b9-qs5n2\" (UID: \"72af3405-dbab-4260-af77-478d33a26190\") " pod="calico-system/calico-typha-69f997f8b9-qs5n2" Mar 13 00:57:47.960903 kubelet[2809]: I0313 00:57:47.960831 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hsh\" (UniqueName: \"kubernetes.io/projected/72af3405-dbab-4260-af77-478d33a26190-kube-api-access-j6hsh\") pod \"calico-typha-69f997f8b9-qs5n2\" (UID: \"72af3405-dbab-4260-af77-478d33a26190\") " pod="calico-system/calico-typha-69f997f8b9-qs5n2" Mar 13 00:57:47.993273 systemd[1]: Created slice kubepods-besteffort-podfc791912_aeae_4392_b236_c23b57d5e534.slice - libcontainer container kubepods-besteffort-podfc791912_aeae_4392_b236_c23b57d5e534.slice. 
Mar 13 00:57:48.064117 kubelet[2809]: I0313 00:57:48.063705 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-cni-net-dir\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064117 kubelet[2809]: I0313 00:57:48.063867 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-flexvol-driver-host\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064117 kubelet[2809]: I0313 00:57:48.063888 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc791912-aeae-4392-b236-c23b57d5e534-tigera-ca-bundle\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064117 kubelet[2809]: I0313 00:57:48.063903 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-var-lib-calico\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064117 kubelet[2809]: I0313 00:57:48.063952 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-policysync\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064460 kubelet[2809]: I0313 00:57:48.063966 2809 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-bpffs\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064460 kubelet[2809]: I0313 00:57:48.063978 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-cni-bin-dir\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064460 kubelet[2809]: I0313 00:57:48.064022 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-nodeproc\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064460 kubelet[2809]: I0313 00:57:48.064061 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-sys-fs\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064460 kubelet[2809]: I0313 00:57:48.064089 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc791912-aeae-4392-b236-c23b57d5e534-node-certs\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.064460 kubelet[2809]: I0313 00:57:48.064109 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-cni-log-dir\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.065103 kubelet[2809]: I0313 00:57:48.064131 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-lib-modules\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.109139 kubelet[2809]: E0313 00:57:48.108465 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:57:48.125853 containerd[1589]: time="2026-03-13T00:57:48.125195501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f997f8b9-qs5n2,Uid:72af3405-dbab-4260-af77-478d33a26190,Namespace:calico-system,Attempt:0,}" Mar 13 00:57:48.168631 kubelet[2809]: I0313 00:57:48.164875 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6nh\" (UniqueName: \"kubernetes.io/projected/fc791912-aeae-4392-b236-c23b57d5e534-kube-api-access-cz6nh\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.168631 kubelet[2809]: I0313 00:57:48.164960 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-var-run-calico\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 
00:57:48.168631 kubelet[2809]: I0313 00:57:48.164979 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc791912-aeae-4392-b236-c23b57d5e534-xtables-lock\") pod \"calico-node-7mdcm\" (UID: \"fc791912-aeae-4392-b236-c23b57d5e534\") " pod="calico-system/calico-node-7mdcm" Mar 13 00:57:48.177637 kubelet[2809]: E0313 00:57:48.170989 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.177637 kubelet[2809]: W0313 00:57:48.171337 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.182325 kubelet[2809]: E0313 00:57:48.179861 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.187620 kubelet[2809]: E0313 00:57:48.183370 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.187620 kubelet[2809]: W0313 00:57:48.183894 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.187620 kubelet[2809]: E0313 00:57:48.183914 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.187620 kubelet[2809]: E0313 00:57:48.185743 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.187620 kubelet[2809]: W0313 00:57:48.185910 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.187620 kubelet[2809]: E0313 00:57:48.185927 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.187620 kubelet[2809]: E0313 00:57:48.187150 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.187620 kubelet[2809]: W0313 00:57:48.187161 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.187620 kubelet[2809]: E0313 00:57:48.187322 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.197024 kubelet[2809]: E0313 00:57:48.196668 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.197024 kubelet[2809]: W0313 00:57:48.196745 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.197024 kubelet[2809]: E0313 00:57:48.196831 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.255964 containerd[1589]: time="2026-03-13T00:57:48.255440152Z" level=info msg="connecting to shim 47f8e7174c11fd4858ded70d4a46d2d69afc6763d96321cc9e97532d3b89d3a7" address="unix:///run/containerd/s/581f192413c32e637e8e915b64f5b4ab0ddcad440b94304c47e1f3a70817ff35" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:48.270044 kubelet[2809]: E0313 00:57:48.269396 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.270261 kubelet[2809]: W0313 00:57:48.270224 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.270675 kubelet[2809]: E0313 00:57:48.270419 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.272606 kubelet[2809]: I0313 00:57:48.272292 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ff8f62bf-6443-4de0-9c1a-1a7823facea3-varrun\") pod \"csi-node-driver-f5h27\" (UID: \"ff8f62bf-6443-4de0-9c1a-1a7823facea3\") " pod="calico-system/csi-node-driver-f5h27" Mar 13 00:57:48.273879 kubelet[2809]: E0313 00:57:48.273702 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.273879 kubelet[2809]: W0313 00:57:48.273721 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.273879 kubelet[2809]: E0313 00:57:48.273735 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.276451 kubelet[2809]: E0313 00:57:48.276014 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.276451 kubelet[2809]: W0313 00:57:48.276031 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.276451 kubelet[2809]: E0313 00:57:48.276045 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.276451 kubelet[2809]: E0313 00:57:48.276381 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.276451 kubelet[2809]: W0313 00:57:48.276390 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.276451 kubelet[2809]: E0313 00:57:48.276400 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.280137 kubelet[2809]: E0313 00:57:48.278985 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.280137 kubelet[2809]: W0313 00:57:48.279003 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.280137 kubelet[2809]: E0313 00:57:48.279015 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.280886 kubelet[2809]: E0313 00:57:48.280865 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.280886 kubelet[2809]: W0313 00:57:48.280881 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.280960 kubelet[2809]: E0313 00:57:48.280892 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.285007 kubelet[2809]: E0313 00:57:48.284048 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.285007 kubelet[2809]: W0313 00:57:48.284176 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.285007 kubelet[2809]: E0313 00:57:48.284214 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.286423 kubelet[2809]: E0313 00:57:48.286403 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.286423 kubelet[2809]: W0313 00:57:48.286420 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.286677 kubelet[2809]: E0313 00:57:48.286442 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.288099 kubelet[2809]: E0313 00:57:48.287930 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.289426 kubelet[2809]: W0313 00:57:48.288416 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.290405 kubelet[2809]: E0313 00:57:48.290384 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.292647 kubelet[2809]: E0313 00:57:48.292302 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.292647 kubelet[2809]: W0313 00:57:48.292398 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.292647 kubelet[2809]: E0313 00:57:48.292411 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.293081 kubelet[2809]: E0313 00:57:48.293060 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.293161 kubelet[2809]: W0313 00:57:48.293142 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.293241 kubelet[2809]: E0313 00:57:48.293224 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.309616 kubelet[2809]: E0313 00:57:48.307982 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.311156 kubelet[2809]: W0313 00:57:48.309943 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.311156 kubelet[2809]: E0313 00:57:48.309975 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.311264 kubelet[2809]: I0313 00:57:48.311199 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff8f62bf-6443-4de0-9c1a-1a7823facea3-kubelet-dir\") pod \"csi-node-driver-f5h27\" (UID: \"ff8f62bf-6443-4de0-9c1a-1a7823facea3\") " pod="calico-system/csi-node-driver-f5h27" Mar 13 00:57:48.312145 kubelet[2809]: E0313 00:57:48.312128 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.312254 kubelet[2809]: W0313 00:57:48.312230 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.312349 kubelet[2809]: E0313 00:57:48.312334 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.313241 kubelet[2809]: E0313 00:57:48.312989 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.313241 kubelet[2809]: W0313 00:57:48.313036 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.313241 kubelet[2809]: E0313 00:57:48.313050 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.314452 kubelet[2809]: E0313 00:57:48.314316 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.314666 kubelet[2809]: W0313 00:57:48.314647 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.314833 kubelet[2809]: E0313 00:57:48.314741 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.315299 kubelet[2809]: E0313 00:57:48.315198 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.315299 kubelet[2809]: W0313 00:57:48.315211 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.315299 kubelet[2809]: E0313 00:57:48.315221 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.315726 kubelet[2809]: I0313 00:57:48.315617 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ff8f62bf-6443-4de0-9c1a-1a7823facea3-socket-dir\") pod \"csi-node-driver-f5h27\" (UID: \"ff8f62bf-6443-4de0-9c1a-1a7823facea3\") " pod="calico-system/csi-node-driver-f5h27" Mar 13 00:57:48.316177 kubelet[2809]: E0313 00:57:48.315850 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.316177 kubelet[2809]: W0313 00:57:48.315921 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.316177 kubelet[2809]: E0313 00:57:48.315940 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.318984 kubelet[2809]: E0313 00:57:48.318381 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.318984 kubelet[2809]: W0313 00:57:48.318659 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.318984 kubelet[2809]: E0313 00:57:48.318677 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.320223 kubelet[2809]: E0313 00:57:48.320115 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.320223 kubelet[2809]: W0313 00:57:48.320192 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.320223 kubelet[2809]: E0313 00:57:48.320206 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.321265 kubelet[2809]: E0313 00:57:48.321176 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.321265 kubelet[2809]: W0313 00:57:48.321249 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.321265 kubelet[2809]: E0313 00:57:48.321262 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.322411 kubelet[2809]: E0313 00:57:48.322314 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.322411 kubelet[2809]: W0313 00:57:48.322395 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.322411 kubelet[2809]: E0313 00:57:48.322407 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.323355 kubelet[2809]: E0313 00:57:48.323254 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.323355 kubelet[2809]: W0313 00:57:48.323329 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.323355 kubelet[2809]: E0313 00:57:48.323341 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.324180 kubelet[2809]: E0313 00:57:48.324102 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.324180 kubelet[2809]: W0313 00:57:48.324170 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.324248 kubelet[2809]: E0313 00:57:48.324183 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.325440 kubelet[2809]: E0313 00:57:48.325275 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.325440 kubelet[2809]: W0313 00:57:48.325356 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.325440 kubelet[2809]: E0313 00:57:48.325368 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.326141 kubelet[2809]: E0313 00:57:48.326044 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.326141 kubelet[2809]: W0313 00:57:48.326123 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.326141 kubelet[2809]: E0313 00:57:48.326135 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.326611 kubelet[2809]: I0313 00:57:48.326371 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ff8f62bf-6443-4de0-9c1a-1a7823facea3-registration-dir\") pod \"csi-node-driver-f5h27\" (UID: \"ff8f62bf-6443-4de0-9c1a-1a7823facea3\") " pod="calico-system/csi-node-driver-f5h27" Mar 13 00:57:48.328385 kubelet[2809]: E0313 00:57:48.328293 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.328385 kubelet[2809]: W0313 00:57:48.328367 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.328385 kubelet[2809]: E0313 00:57:48.328381 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.329419 kubelet[2809]: E0313 00:57:48.329308 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.329419 kubelet[2809]: W0313 00:57:48.329387 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.329419 kubelet[2809]: E0313 00:57:48.329399 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.329909 kubelet[2809]: I0313 00:57:48.329864 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fxg2\" (UniqueName: \"kubernetes.io/projected/ff8f62bf-6443-4de0-9c1a-1a7823facea3-kube-api-access-9fxg2\") pod \"csi-node-driver-f5h27\" (UID: \"ff8f62bf-6443-4de0-9c1a-1a7823facea3\") " pod="calico-system/csi-node-driver-f5h27" Mar 13 00:57:48.330287 kubelet[2809]: E0313 00:57:48.330192 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.330287 kubelet[2809]: W0313 00:57:48.330267 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.330287 kubelet[2809]: E0313 00:57:48.330278 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.331633 kubelet[2809]: E0313 00:57:48.331236 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.331633 kubelet[2809]: W0313 00:57:48.331253 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.331633 kubelet[2809]: E0313 00:57:48.331263 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.332129 kubelet[2809]: E0313 00:57:48.331991 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.332129 kubelet[2809]: W0313 00:57:48.332007 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.332129 kubelet[2809]: E0313 00:57:48.332017 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.363348 kubelet[2809]: E0313 00:57:48.362736 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.363348 kubelet[2809]: W0313 00:57:48.362964 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.363348 kubelet[2809]: E0313 00:57:48.362994 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.398144 systemd[1]: Started cri-containerd-47f8e7174c11fd4858ded70d4a46d2d69afc6763d96321cc9e97532d3b89d3a7.scope - libcontainer container 47f8e7174c11fd4858ded70d4a46d2d69afc6763d96321cc9e97532d3b89d3a7. 
Mar 13 00:57:48.433066 kubelet[2809]: E0313 00:57:48.432938 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.433066 kubelet[2809]: W0313 00:57:48.433046 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.433066 kubelet[2809]: E0313 00:57:48.433079 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.434047 kubelet[2809]: E0313 00:57:48.433970 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.434047 kubelet[2809]: W0313 00:57:48.433996 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.434047 kubelet[2809]: E0313 00:57:48.434019 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.435563 kubelet[2809]: E0313 00:57:48.434964 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.435563 kubelet[2809]: W0313 00:57:48.435049 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.435563 kubelet[2809]: E0313 00:57:48.435071 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:48.436635 kubelet[2809]: E0313 00:57:48.435899 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.436635 kubelet[2809]: W0313 00:57:48.435919 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.436635 kubelet[2809]: E0313 00:57:48.435935 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:48.438895 kubelet[2809]: E0313 00:57:48.438709 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:48.438895 kubelet[2809]: W0313 00:57:48.438730 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:48.438895 kubelet[2809]: E0313 00:57:48.438745 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[… the three kubelet messages above (driver-call.go:262, driver-call.go:149, plugins.go:697) repeat with new timestamps through Mar 13 00:57:48.472 …]
Mar 13 00:57:48.472314 kubelet[2809]: E0313 00:57:48.472313 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 13 00:57:48.503807 kubelet[2809]: E0313 00:57:48.502463 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:57:48.503807 kubelet[2809]: W0313 00:57:48.503452 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:57:48.504226 kubelet[2809]: E0313 00:57:48.503653 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:57:48.536970 containerd[1589]: time="2026-03-13T00:57:48.536396707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f997f8b9-qs5n2,Uid:72af3405-dbab-4260-af77-478d33a26190,Namespace:calico-system,Attempt:0,} returns sandbox id \"47f8e7174c11fd4858ded70d4a46d2d69afc6763d96321cc9e97532d3b89d3a7\""
Mar 13 00:57:48.544106 containerd[1589]: time="2026-03-13T00:57:48.543924140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 13 00:57:48.606172 containerd[1589]: time="2026-03-13T00:57:48.606027005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7mdcm,Uid:fc791912-aeae-4392-b236-c23b57d5e534,Namespace:calico-system,Attempt:0,}"
Mar 13 00:57:48.691705 containerd[1589]: time="2026-03-13T00:57:48.691656845Z" level=info msg="connecting to shim 5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c" address="unix:///run/containerd/s/9aba69cbdd2a232339cbadcf2a9f60c2523017e704c2ddf64e93f202c38f059e" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:57:48.777173 systemd[1]: Started cri-containerd-5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c.scope - libcontainer container 5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c.
Mar 13 00:57:48.903641 containerd[1589]: time="2026-03-13T00:57:48.902257228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7mdcm,Uid:fc791912-aeae-4392-b236-c23b57d5e534,Namespace:calico-system,Attempt:0,} returns sandbox id \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\""
Mar 13 00:57:49.271101 kubelet[2809]: E0313 00:57:49.270691 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3"
Mar 13 00:57:49.394397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928009556.mount: Deactivated successfully.
Mar 13 00:57:51.260621 kubelet[2809]: E0313 00:57:51.259991 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3"
Mar 13 00:57:52.081168 containerd[1589]: time="2026-03-13T00:57:52.080660992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:57:52.083288 containerd[1589]: time="2026-03-13T00:57:52.083111451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 13 00:57:52.086284 containerd[1589]: time="2026-03-13T00:57:52.086024606Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:57:52.091954 containerd[1589]: time="2026-03-13T00:57:52.091729424Z" level=info msg="ImageCreate event
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:57:52.093123 containerd[1589]: time="2026-03-13T00:57:52.092941535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.548967613s"
Mar 13 00:57:52.093123 containerd[1589]: time="2026-03-13T00:57:52.093053734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 13 00:57:52.095913 containerd[1589]: time="2026-03-13T00:57:52.095704745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 13 00:57:52.136751 containerd[1589]: time="2026-03-13T00:57:52.136379449Z" level=info msg="CreateContainer within sandbox \"47f8e7174c11fd4858ded70d4a46d2d69afc6763d96321cc9e97532d3b89d3a7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 13 00:57:52.155666 containerd[1589]: time="2026-03-13T00:57:52.155382064Z" level=info msg="Container f4930d93e6985a7020bc7188a58aa043787387773ccc4eb99f99840456ffa69c: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:57:52.174384 containerd[1589]: time="2026-03-13T00:57:52.174120276Z" level=info msg="CreateContainer within sandbox \"47f8e7174c11fd4858ded70d4a46d2d69afc6763d96321cc9e97532d3b89d3a7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f4930d93e6985a7020bc7188a58aa043787387773ccc4eb99f99840456ffa69c\""
Mar 13 00:57:52.176453 containerd[1589]: time="2026-03-13T00:57:52.176327681Z" level=info msg="StartContainer for
\"f4930d93e6985a7020bc7188a58aa043787387773ccc4eb99f99840456ffa69c\""
Mar 13 00:57:52.179647 containerd[1589]: time="2026-03-13T00:57:52.179242562Z" level=info msg="connecting to shim f4930d93e6985a7020bc7188a58aa043787387773ccc4eb99f99840456ffa69c" address="unix:///run/containerd/s/581f192413c32e637e8e915b64f5b4ab0ddcad440b94304c47e1f3a70817ff35" protocol=ttrpc version=3
Mar 13 00:57:52.245082 systemd[1]: Started cri-containerd-f4930d93e6985a7020bc7188a58aa043787387773ccc4eb99f99840456ffa69c.scope - libcontainer container f4930d93e6985a7020bc7188a58aa043787387773ccc4eb99f99840456ffa69c.
Mar 13 00:57:52.412869 containerd[1589]: time="2026-03-13T00:57:52.412465183Z" level=info msg="StartContainer for \"f4930d93e6985a7020bc7188a58aa043787387773ccc4eb99f99840456ffa69c\" returns successfully"
Mar 13 00:57:52.919080 kubelet[2809]: E0313 00:57:52.918976 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:57:52.919080 kubelet[2809]: W0313 00:57:52.919002 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:57:52.919080 kubelet[2809]: E0313 00:57:52.919021 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
[… the three kubelet messages above (driver-call.go:262, driver-call.go:149, plugins.go:697) repeat with new timestamps through Mar 13 00:57:52.938 …]
Mar 13 00:57:52.938073 kubelet[2809]: E0313 00:57:52.938063 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 13 00:57:52.945923 kubelet[2809]: I0313 00:57:52.944897 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69f997f8b9-qs5n2" podStartSLOduration=2.3930424820000002 podStartE2EDuration="5.944880041s" podCreationTimestamp="2026-03-13 00:57:47 +0000 UTC" firstStartedPulling="2026-03-13 00:57:48.543042384 +0000 UTC m=+24.579437284" lastFinishedPulling="2026-03-13 00:57:52.094879943 +0000 UTC m=+28.131274843" observedRunningTime="2026-03-13 00:57:52.941019055 +0000 UTC m=+28.977413965" watchObservedRunningTime="2026-03-13 00:57:52.944880041 +0000 UTC m=+28.981274941"
[… the three kubelet FlexVolume messages repeat with new timestamps through Mar 13 00:57:52.957 …]
Mar 13 00:57:52.959405 kubelet[2809]: E0313 00:57:52.957894 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 13 00:57:52.961622 kubelet[2809]: E0313 00:57:52.961333 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.961622 kubelet[2809]: W0313 00:57:52.961438 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.961622 kubelet[2809]: E0313 00:57:52.961459 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:52.965353 kubelet[2809]: E0313 00:57:52.965093 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.965353 kubelet[2809]: W0313 00:57:52.965199 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.965353 kubelet[2809]: E0313 00:57:52.965219 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:52.968639 kubelet[2809]: E0313 00:57:52.967995 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.968639 kubelet[2809]: W0313 00:57:52.968019 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.968639 kubelet[2809]: E0313 00:57:52.968041 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:52.970262 kubelet[2809]: E0313 00:57:52.970098 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.971633 kubelet[2809]: W0313 00:57:52.970323 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.971633 kubelet[2809]: E0313 00:57:52.970353 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:52.979180 kubelet[2809]: E0313 00:57:52.979043 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.979180 kubelet[2809]: W0313 00:57:52.979160 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.979287 kubelet[2809]: E0313 00:57:52.979209 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:52.982251 kubelet[2809]: E0313 00:57:52.982183 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.982251 kubelet[2809]: W0313 00:57:52.982205 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.982251 kubelet[2809]: E0313 00:57:52.982224 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:52.984381 kubelet[2809]: E0313 00:57:52.984165 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.984381 kubelet[2809]: W0313 00:57:52.984302 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.984381 kubelet[2809]: E0313 00:57:52.984332 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:52.986064 kubelet[2809]: E0313 00:57:52.985932 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.986064 kubelet[2809]: W0313 00:57:52.985959 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.986064 kubelet[2809]: E0313 00:57:52.985979 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:52.988991 kubelet[2809]: E0313 00:57:52.988661 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.988991 kubelet[2809]: W0313 00:57:52.988679 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.988991 kubelet[2809]: E0313 00:57:52.988694 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:57:52.989987 kubelet[2809]: E0313 00:57:52.989887 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:57:52.989987 kubelet[2809]: W0313 00:57:52.989903 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:57:52.989987 kubelet[2809]: E0313 00:57:52.989916 2809 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:57:53.264002 kubelet[2809]: E0313 00:57:53.263703 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:57:53.481761 containerd[1589]: time="2026-03-13T00:57:53.481459982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:53.484891 containerd[1589]: time="2026-03-13T00:57:53.483169629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 13 00:57:53.487756 containerd[1589]: time="2026-03-13T00:57:53.487462932Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:53.495727 containerd[1589]: time="2026-03-13T00:57:53.495279768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:53.496670 containerd[1589]: time="2026-03-13T00:57:53.496317463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.400569266s" Mar 13 00:57:53.496670 containerd[1589]: time="2026-03-13T00:57:53.496422238Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:57:53.507030 containerd[1589]: time="2026-03-13T00:57:53.506390198Z" level=info msg="CreateContainer within sandbox \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:57:53.528862 containerd[1589]: time="2026-03-13T00:57:53.527694827Z" level=info msg="Container b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:53.546438 containerd[1589]: time="2026-03-13T00:57:53.546270372Z" level=info msg="CreateContainer within sandbox \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa\"" Mar 13 00:57:53.549098 containerd[1589]: time="2026-03-13T00:57:53.548746144Z" level=info msg="StartContainer for \"b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa\"" Mar 13 00:57:53.553210 containerd[1589]: time="2026-03-13T00:57:53.553128558Z" level=info msg="connecting to shim b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa" address="unix:///run/containerd/s/9aba69cbdd2a232339cbadcf2a9f60c2523017e704c2ddf64e93f202c38f059e" protocol=ttrpc version=3 Mar 13 00:57:53.640062 systemd[1]: Started cri-containerd-b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa.scope - libcontainer container b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa. 
Mar 13 00:57:53.845362 containerd[1589]: time="2026-03-13T00:57:53.844717379Z" level=info msg="StartContainer for \"b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa\" returns successfully" Mar 13 00:57:53.891410 systemd[1]: cri-containerd-b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa.scope: Deactivated successfully. Mar 13 00:57:53.902367 containerd[1589]: time="2026-03-13T00:57:53.902258002Z" level=info msg="received container exit event container_id:\"b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa\" id:\"b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa\" pid:3494 exited_at:{seconds:1773363473 nanos:896621650}" Mar 13 00:57:53.922017 kubelet[2809]: I0313 00:57:53.921691 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:57:54.025378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b61a17989d3096732f9b2fc5dcdd99efa5aaead0e255bda34de46ddf6a7b67aa-rootfs.mount: Deactivated successfully. 
Mar 13 00:57:54.933687 containerd[1589]: time="2026-03-13T00:57:54.933244335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:57:55.256890 kubelet[2809]: E0313 00:57:55.256743 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:57:57.257666 kubelet[2809]: E0313 00:57:57.257132 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:57:59.257924 kubelet[2809]: E0313 00:57:59.257179 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:01.259367 kubelet[2809]: E0313 00:58:01.259207 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:03.258143 kubelet[2809]: E0313 00:58:03.258068 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:05.258255 kubelet[2809]: E0313 00:58:05.258188 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:05.590059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776646088.mount: Deactivated successfully. Mar 13 00:58:05.666200 containerd[1589]: time="2026-03-13T00:58:05.665910059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:05.668092 containerd[1589]: time="2026-03-13T00:58:05.667888294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:58:05.670045 containerd[1589]: time="2026-03-13T00:58:05.669977708Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:05.675658 containerd[1589]: time="2026-03-13T00:58:05.675381732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:05.676091 containerd[1589]: time="2026-03-13T00:58:05.675953252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 10.742670466s" Mar 13 
00:58:05.676762 containerd[1589]: time="2026-03-13T00:58:05.676621961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:58:05.695809 containerd[1589]: time="2026-03-13T00:58:05.695043015Z" level=info msg="CreateContainer within sandbox \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:58:05.715400 containerd[1589]: time="2026-03-13T00:58:05.715293772Z" level=info msg="Container ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:05.767243 containerd[1589]: time="2026-03-13T00:58:05.767033555Z" level=info msg="CreateContainer within sandbox \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07\"" Mar 13 00:58:05.768896 containerd[1589]: time="2026-03-13T00:58:05.768781432Z" level=info msg="StartContainer for \"ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07\"" Mar 13 00:58:05.771188 containerd[1589]: time="2026-03-13T00:58:05.771028914Z" level=info msg="connecting to shim ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07" address="unix:///run/containerd/s/9aba69cbdd2a232339cbadcf2a9f60c2523017e704c2ddf64e93f202c38f059e" protocol=ttrpc version=3 Mar 13 00:58:05.864892 systemd[1]: Started cri-containerd-ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07.scope - libcontainer container ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07. 
Mar 13 00:58:06.066888 containerd[1589]: time="2026-03-13T00:58:06.066778181Z" level=info msg="StartContainer for \"ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07\" returns successfully" Mar 13 00:58:06.248397 systemd[1]: cri-containerd-ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07.scope: Deactivated successfully. Mar 13 00:58:06.270881 containerd[1589]: time="2026-03-13T00:58:06.270445718Z" level=info msg="received container exit event container_id:\"ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07\" id:\"ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07\" pid:3552 exited_at:{seconds:1773363486 nanos:253875503}" Mar 13 00:58:06.336276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad40e0ce97524bfbe8a462d09ca3f1ba8ecd2e5afe552ced52c3edd97ccfda07-rootfs.mount: Deactivated successfully. Mar 13 00:58:06.991359 containerd[1589]: time="2026-03-13T00:58:06.991142939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:58:07.257191 kubelet[2809]: E0313 00:58:07.256966 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:09.257062 kubelet[2809]: E0313 00:58:09.256910 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:11.258012 kubelet[2809]: E0313 00:58:11.257425 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:12.159128 containerd[1589]: time="2026-03-13T00:58:12.159003345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:12.161100 containerd[1589]: time="2026-03-13T00:58:12.161064147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:58:12.164828 containerd[1589]: time="2026-03-13T00:58:12.164421957Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:12.170248 containerd[1589]: time="2026-03-13T00:58:12.170225268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:12.171739 containerd[1589]: time="2026-03-13T00:58:12.171378472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.179266908s" Mar 13 00:58:12.171739 containerd[1589]: time="2026-03-13T00:58:12.171466305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:58:12.186719 containerd[1589]: time="2026-03-13T00:58:12.186009827Z" level=info msg="CreateContainer within sandbox 
\"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:58:12.213243 containerd[1589]: time="2026-03-13T00:58:12.204071855Z" level=info msg="Container c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:12.230185 containerd[1589]: time="2026-03-13T00:58:12.230099611Z" level=info msg="CreateContainer within sandbox \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91\"" Mar 13 00:58:12.231415 containerd[1589]: time="2026-03-13T00:58:12.231324272Z" level=info msg="StartContainer for \"c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91\"" Mar 13 00:58:12.232971 containerd[1589]: time="2026-03-13T00:58:12.232859840Z" level=info msg="connecting to shim c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91" address="unix:///run/containerd/s/9aba69cbdd2a232339cbadcf2a9f60c2523017e704c2ddf64e93f202c38f059e" protocol=ttrpc version=3 Mar 13 00:58:12.316021 systemd[1]: Started cri-containerd-c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91.scope - libcontainer container c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91. 
Mar 13 00:58:12.451932 containerd[1589]: time="2026-03-13T00:58:12.451765953Z" level=info msg="StartContainer for \"c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91\" returns successfully" Mar 13 00:58:13.258214 kubelet[2809]: E0313 00:58:13.258085 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f5h27" podUID="ff8f62bf-6443-4de0-9c1a-1a7823facea3" Mar 13 00:58:13.513056 systemd[1]: cri-containerd-c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91.scope: Deactivated successfully. Mar 13 00:58:13.513737 systemd[1]: cri-containerd-c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91.scope: Consumed 1.199s CPU time, 184.8M memory peak, 5.1M read from disk, 177M written to disk. Mar 13 00:58:13.517772 containerd[1589]: time="2026-03-13T00:58:13.517341422Z" level=info msg="received container exit event container_id:\"c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91\" id:\"c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91\" pid:3612 exited_at:{seconds:1773363493 nanos:517019154}" Mar 13 00:58:13.538449 kubelet[2809]: I0313 00:58:13.538420 2809 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:58:13.584102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8d4887841aa939e01f4e76d46fe0da334c4f497af469879bf58f8994c558b91-rootfs.mount: Deactivated successfully. Mar 13 00:58:13.706065 systemd[1]: Created slice kubepods-besteffort-podd66d3609_5c28_4fe9_b409_dba4dd44da00.slice - libcontainer container kubepods-besteffort-podd66d3609_5c28_4fe9_b409_dba4dd44da00.slice. 
Mar 13 00:58:13.733221 systemd[1]: Created slice kubepods-besteffort-podc521c696_45b4_490b_8992_2fc320e8df9b.slice - libcontainer container kubepods-besteffort-podc521c696_45b4_490b_8992_2fc320e8df9b.slice. Mar 13 00:58:13.746392 systemd[1]: Created slice kubepods-burstable-podd9961465_6f87_4262_94c9_59f9582bd4cd.slice - libcontainer container kubepods-burstable-podd9961465_6f87_4262_94c9_59f9582bd4cd.slice. Mar 13 00:58:13.758444 systemd[1]: Created slice kubepods-besteffort-podc4871750_34d0_4aea_af63_b853a4756c22.slice - libcontainer container kubepods-besteffort-podc4871750_34d0_4aea_af63_b853a4756c22.slice. Mar 13 00:58:13.764899 kubelet[2809]: I0313 00:58:13.764342 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d66d3609-5c28-4fe9-b409-dba4dd44da00-calico-apiserver-certs\") pod \"calico-apiserver-855966fd9b-cct9s\" (UID: \"d66d3609-5c28-4fe9-b409-dba4dd44da00\") " pod="calico-system/calico-apiserver-855966fd9b-cct9s" Mar 13 00:58:13.765675 kubelet[2809]: I0313 00:58:13.765447 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnf8r\" (UniqueName: \"kubernetes.io/projected/d66d3609-5c28-4fe9-b409-dba4dd44da00-kube-api-access-rnf8r\") pod \"calico-apiserver-855966fd9b-cct9s\" (UID: \"d66d3609-5c28-4fe9-b409-dba4dd44da00\") " pod="calico-system/calico-apiserver-855966fd9b-cct9s" Mar 13 00:58:13.768734 systemd[1]: Created slice kubepods-besteffort-podecbde260_ff56_4143_b201_3e59a89f22c5.slice - libcontainer container kubepods-besteffort-podecbde260_ff56_4143_b201_3e59a89f22c5.slice. Mar 13 00:58:13.777701 systemd[1]: Created slice kubepods-burstable-podd33ce537_d1c4_4ba9_ad27_c83c37a2ac38.slice - libcontainer container kubepods-burstable-podd33ce537_d1c4_4ba9_ad27_c83c37a2ac38.slice. 
Mar 13 00:58:13.792409 systemd[1]: Created slice kubepods-besteffort-pod4a747119_f21e_443b_b222_e61348ddf3e6.slice - libcontainer container kubepods-besteffort-pod4a747119_f21e_443b_b222_e61348ddf3e6.slice. Mar 13 00:58:13.868103 kubelet[2809]: I0313 00:58:13.867775 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8xb\" (UniqueName: \"kubernetes.io/projected/c4871750-34d0-4aea-af63-b853a4756c22-kube-api-access-5v8xb\") pod \"goldmane-cccfbd5cf-df2cr\" (UID: \"c4871750-34d0-4aea-af63-b853a4756c22\") " pod="calico-system/goldmane-cccfbd5cf-df2cr" Mar 13 00:58:13.868103 kubelet[2809]: I0313 00:58:13.867878 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4871750-34d0-4aea-af63-b853a4756c22-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-df2cr\" (UID: \"c4871750-34d0-4aea-af63-b853a4756c22\") " pod="calico-system/goldmane-cccfbd5cf-df2cr" Mar 13 00:58:13.868103 kubelet[2809]: I0313 00:58:13.867897 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw46s\" (UniqueName: \"kubernetes.io/projected/ecbde260-ff56-4143-b201-3e59a89f22c5-kube-api-access-fw46s\") pod \"calico-apiserver-855966fd9b-8t47p\" (UID: \"ecbde260-ff56-4143-b201-3e59a89f22c5\") " pod="calico-system/calico-apiserver-855966fd9b-8t47p" Mar 13 00:58:13.868103 kubelet[2809]: I0313 00:58:13.867913 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9961465-6f87-4262-94c9-59f9582bd4cd-config-volume\") pod \"coredns-66bc5c9577-x69q7\" (UID: \"d9961465-6f87-4262-94c9-59f9582bd4cd\") " pod="kube-system/coredns-66bc5c9577-x69q7" Mar 13 00:58:13.868103 kubelet[2809]: I0313 00:58:13.867930 2809 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tj5f\" (UniqueName: \"kubernetes.io/projected/d9961465-6f87-4262-94c9-59f9582bd4cd-kube-api-access-9tj5f\") pod \"coredns-66bc5c9577-x69q7\" (UID: \"d9961465-6f87-4262-94c9-59f9582bd4cd\") " pod="kube-system/coredns-66bc5c9577-x69q7" Mar 13 00:58:13.868336 kubelet[2809]: I0313 00:58:13.867966 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecbde260-ff56-4143-b201-3e59a89f22c5-calico-apiserver-certs\") pod \"calico-apiserver-855966fd9b-8t47p\" (UID: \"ecbde260-ff56-4143-b201-3e59a89f22c5\") " pod="calico-system/calico-apiserver-855966fd9b-8t47p" Mar 13 00:58:13.868336 kubelet[2809]: I0313 00:58:13.867980 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d33ce537-d1c4-4ba9-ad27-c83c37a2ac38-config-volume\") pod \"coredns-66bc5c9577-f8bbj\" (UID: \"d33ce537-d1c4-4ba9-ad27-c83c37a2ac38\") " pod="kube-system/coredns-66bc5c9577-f8bbj" Mar 13 00:58:13.868336 kubelet[2809]: I0313 00:58:13.868003 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtns8\" (UniqueName: \"kubernetes.io/projected/d33ce537-d1c4-4ba9-ad27-c83c37a2ac38-kube-api-access-xtns8\") pod \"coredns-66bc5c9577-f8bbj\" (UID: \"d33ce537-d1c4-4ba9-ad27-c83c37a2ac38\") " pod="kube-system/coredns-66bc5c9577-f8bbj" Mar 13 00:58:13.868336 kubelet[2809]: I0313 00:58:13.868018 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-backend-key-pair\") pod \"whisker-995bb796-hzqrh\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " pod="calico-system/whisker-995bb796-hzqrh" Mar 13 
00:58:13.868336 kubelet[2809]: I0313 00:58:13.868039 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-nginx-config\") pod \"whisker-995bb796-hzqrh\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " pod="calico-system/whisker-995bb796-hzqrh" Mar 13 00:58:13.868450 kubelet[2809]: I0313 00:58:13.868051 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-ca-bundle\") pod \"whisker-995bb796-hzqrh\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " pod="calico-system/whisker-995bb796-hzqrh" Mar 13 00:58:13.868450 kubelet[2809]: I0313 00:58:13.868064 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862lw\" (UniqueName: \"kubernetes.io/projected/c521c696-45b4-490b-8992-2fc320e8df9b-kube-api-access-862lw\") pod \"whisker-995bb796-hzqrh\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " pod="calico-system/whisker-995bb796-hzqrh" Mar 13 00:58:13.868450 kubelet[2809]: I0313 00:58:13.868079 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6rg\" (UniqueName: \"kubernetes.io/projected/4a747119-f21e-443b-b222-e61348ddf3e6-kube-api-access-xm6rg\") pod \"calico-kube-controllers-58cdb8c654-b6w4d\" (UID: \"4a747119-f21e-443b-b222-e61348ddf3e6\") " pod="calico-system/calico-kube-controllers-58cdb8c654-b6w4d" Mar 13 00:58:13.868450 kubelet[2809]: I0313 00:58:13.868094 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4871750-34d0-4aea-af63-b853a4756c22-config\") pod \"goldmane-cccfbd5cf-df2cr\" (UID: \"c4871750-34d0-4aea-af63-b853a4756c22\") " 
pod="calico-system/goldmane-cccfbd5cf-df2cr" Mar 13 00:58:13.868450 kubelet[2809]: I0313 00:58:13.868110 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a747119-f21e-443b-b222-e61348ddf3e6-tigera-ca-bundle\") pod \"calico-kube-controllers-58cdb8c654-b6w4d\" (UID: \"4a747119-f21e-443b-b222-e61348ddf3e6\") " pod="calico-system/calico-kube-controllers-58cdb8c654-b6w4d" Mar 13 00:58:13.869025 kubelet[2809]: I0313 00:58:13.868123 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c4871750-34d0-4aea-af63-b853a4756c22-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-df2cr\" (UID: \"c4871750-34d0-4aea-af63-b853a4756c22\") " pod="calico-system/goldmane-cccfbd5cf-df2cr" Mar 13 00:58:14.024832 containerd[1589]: time="2026-03-13T00:58:14.024670110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855966fd9b-cct9s,Uid:d66d3609-5c28-4fe9-b409-dba4dd44da00,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:14.045245 containerd[1589]: time="2026-03-13T00:58:14.044462661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-995bb796-hzqrh,Uid:c521c696-45b4-490b-8992-2fc320e8df9b,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:14.061007 containerd[1589]: time="2026-03-13T00:58:14.060736677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x69q7,Uid:d9961465-6f87-4262-94c9-59f9582bd4cd,Namespace:kube-system,Attempt:0,}" Mar 13 00:58:14.068908 containerd[1589]: time="2026-03-13T00:58:14.068694240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-df2cr,Uid:c4871750-34d0-4aea-af63-b853a4756c22,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:14.083740 containerd[1589]: time="2026-03-13T00:58:14.082876028Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-855966fd9b-8t47p,Uid:ecbde260-ff56-4143-b201-3e59a89f22c5,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:14.109092 containerd[1589]: time="2026-03-13T00:58:14.108885073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f8bbj,Uid:d33ce537-d1c4-4ba9-ad27-c83c37a2ac38,Namespace:kube-system,Attempt:0,}" Mar 13 00:58:14.111062 containerd[1589]: time="2026-03-13T00:58:14.109280487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cdb8c654-b6w4d,Uid:4a747119-f21e-443b-b222-e61348ddf3e6,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:14.160575 containerd[1589]: time="2026-03-13T00:58:14.160285862Z" level=info msg="CreateContainer within sandbox \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:58:14.249820 containerd[1589]: time="2026-03-13T00:58:14.249281864Z" level=info msg="Container d23798934782081add0d418e344a298a6a2a7bd6140d59cf90554fc532d73cb9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:14.286299 containerd[1589]: time="2026-03-13T00:58:14.286074363Z" level=info msg="CreateContainer within sandbox \"5645dd89237b8ca18cb1bc432a0ecdeb784dfe2fa9abda5da65993a82d72a15c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d23798934782081add0d418e344a298a6a2a7bd6140d59cf90554fc532d73cb9\"" Mar 13 00:58:14.310311 containerd[1589]: time="2026-03-13T00:58:14.310148684Z" level=info msg="StartContainer for \"d23798934782081add0d418e344a298a6a2a7bd6140d59cf90554fc532d73cb9\"" Mar 13 00:58:14.312807 containerd[1589]: time="2026-03-13T00:58:14.312242969Z" level=info msg="connecting to shim d23798934782081add0d418e344a298a6a2a7bd6140d59cf90554fc532d73cb9" address="unix:///run/containerd/s/9aba69cbdd2a232339cbadcf2a9f60c2523017e704c2ddf64e93f202c38f059e" protocol=ttrpc version=3 Mar 13 00:58:14.381956 systemd[1]: Started 
cri-containerd-d23798934782081add0d418e344a298a6a2a7bd6140d59cf90554fc532d73cb9.scope - libcontainer container d23798934782081add0d418e344a298a6a2a7bd6140d59cf90554fc532d73cb9. Mar 13 00:58:14.452816 containerd[1589]: time="2026-03-13T00:58:14.452595000Z" level=error msg="Failed to destroy network for sandbox \"e321c36d6e68f52ed974b0bc1923a24cede9e859dea3caa3bae842d27b99b6c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.457975 containerd[1589]: time="2026-03-13T00:58:14.457796620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855966fd9b-cct9s,Uid:d66d3609-5c28-4fe9-b409-dba4dd44da00,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e321c36d6e68f52ed974b0bc1923a24cede9e859dea3caa3bae842d27b99b6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.479732 kubelet[2809]: E0313 00:58:14.479122 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e321c36d6e68f52ed974b0bc1923a24cede9e859dea3caa3bae842d27b99b6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.480903 kubelet[2809]: E0313 00:58:14.480676 2809 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e321c36d6e68f52ed974b0bc1923a24cede9e859dea3caa3bae842d27b99b6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-855966fd9b-cct9s" Mar 13 00:58:14.481726 kubelet[2809]: E0313 00:58:14.481421 2809 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e321c36d6e68f52ed974b0bc1923a24cede9e859dea3caa3bae842d27b99b6c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-855966fd9b-cct9s" Mar 13 00:58:14.484823 kubelet[2809]: E0313 00:58:14.484754 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855966fd9b-cct9s_calico-system(d66d3609-5c28-4fe9-b409-dba4dd44da00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855966fd9b-cct9s_calico-system(d66d3609-5c28-4fe9-b409-dba4dd44da00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e321c36d6e68f52ed974b0bc1923a24cede9e859dea3caa3bae842d27b99b6c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-855966fd9b-cct9s" podUID="d66d3609-5c28-4fe9-b409-dba4dd44da00" Mar 13 00:58:14.544157 containerd[1589]: time="2026-03-13T00:58:14.543990030Z" level=error msg="Failed to destroy network for sandbox \"545cdc96cb474811f22e159c7bcc116745b7a96399efa2f0d40d916eadc888f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.548069 containerd[1589]: time="2026-03-13T00:58:14.548039563Z" level=error msg="Failed to destroy network for sandbox 
\"c4bfef1b9776521d2e5390339a4607f48c0796cd0c3a1d7cd0f5eaa8aff2da5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.559317 containerd[1589]: time="2026-03-13T00:58:14.558164436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-995bb796-hzqrh,Uid:c521c696-45b4-490b-8992-2fc320e8df9b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"545cdc96cb474811f22e159c7bcc116745b7a96399efa2f0d40d916eadc888f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.561346 kubelet[2809]: E0313 00:58:14.561083 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545cdc96cb474811f22e159c7bcc116745b7a96399efa2f0d40d916eadc888f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.561346 kubelet[2809]: E0313 00:58:14.561218 2809 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"545cdc96cb474811f22e159c7bcc116745b7a96399efa2f0d40d916eadc888f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-995bb796-hzqrh" Mar 13 00:58:14.561346 kubelet[2809]: E0313 00:58:14.561247 2809 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"545cdc96cb474811f22e159c7bcc116745b7a96399efa2f0d40d916eadc888f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-995bb796-hzqrh" Mar 13 00:58:14.561776 kubelet[2809]: E0313 00:58:14.561295 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-995bb796-hzqrh_calico-system(c521c696-45b4-490b-8992-2fc320e8df9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-995bb796-hzqrh_calico-system(c521c696-45b4-490b-8992-2fc320e8df9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"545cdc96cb474811f22e159c7bcc116745b7a96399efa2f0d40d916eadc888f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-995bb796-hzqrh" podUID="c521c696-45b4-490b-8992-2fc320e8df9b" Mar 13 00:58:14.563146 containerd[1589]: time="2026-03-13T00:58:14.563114791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855966fd9b-8t47p,Uid:ecbde260-ff56-4143-b201-3e59a89f22c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bfef1b9776521d2e5390339a4607f48c0796cd0c3a1d7cd0f5eaa8aff2da5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.567593 kubelet[2809]: E0313 00:58:14.566679 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bfef1b9776521d2e5390339a4607f48c0796cd0c3a1d7cd0f5eaa8aff2da5a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.567593 kubelet[2809]: E0313 00:58:14.566724 2809 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bfef1b9776521d2e5390339a4607f48c0796cd0c3a1d7cd0f5eaa8aff2da5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-855966fd9b-8t47p" Mar 13 00:58:14.567593 kubelet[2809]: E0313 00:58:14.566743 2809 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bfef1b9776521d2e5390339a4607f48c0796cd0c3a1d7cd0f5eaa8aff2da5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-855966fd9b-8t47p" Mar 13 00:58:14.567789 kubelet[2809]: E0313 00:58:14.566788 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855966fd9b-8t47p_calico-system(ecbde260-ff56-4143-b201-3e59a89f22c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855966fd9b-8t47p_calico-system(ecbde260-ff56-4143-b201-3e59a89f22c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4bfef1b9776521d2e5390339a4607f48c0796cd0c3a1d7cd0f5eaa8aff2da5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-855966fd9b-8t47p" podUID="ecbde260-ff56-4143-b201-3e59a89f22c5" Mar 13 00:58:14.572219 
containerd[1589]: time="2026-03-13T00:58:14.571844973Z" level=error msg="Failed to destroy network for sandbox \"651dedafee1ecb27366d197a6ae6842e7f890975a90a9f2edb8d5e9429944f2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.572285 containerd[1589]: time="2026-03-13T00:58:14.572264002Z" level=error msg="Failed to destroy network for sandbox \"e50c8837282d53cdd0df4d56a6a4a350bf3f0b1987e95a0c00288370a5159f59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.582555 containerd[1589]: time="2026-03-13T00:58:14.582258655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x69q7,Uid:d9961465-6f87-4262-94c9-59f9582bd4cd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e50c8837282d53cdd0df4d56a6a4a350bf3f0b1987e95a0c00288370a5159f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.584098 kubelet[2809]: E0313 00:58:14.583887 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e50c8837282d53cdd0df4d56a6a4a350bf3f0b1987e95a0c00288370a5159f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.584098 kubelet[2809]: E0313 00:58:14.583998 2809 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e50c8837282d53cdd0df4d56a6a4a350bf3f0b1987e95a0c00288370a5159f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x69q7" Mar 13 00:58:14.584098 kubelet[2809]: E0313 00:58:14.584020 2809 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e50c8837282d53cdd0df4d56a6a4a350bf3f0b1987e95a0c00288370a5159f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x69q7" Mar 13 00:58:14.584262 kubelet[2809]: E0313 00:58:14.584072 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-x69q7_kube-system(d9961465-6f87-4262-94c9-59f9582bd4cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-x69q7_kube-system(d9961465-6f87-4262-94c9-59f9582bd4cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e50c8837282d53cdd0df4d56a6a4a350bf3f0b1987e95a0c00288370a5159f59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x69q7" podUID="d9961465-6f87-4262-94c9-59f9582bd4cd" Mar 13 00:58:14.600457 containerd[1589]: time="2026-03-13T00:58:14.600427656Z" level=error msg="Failed to destroy network for sandbox \"faf70fe7c497a13479f613c24d5a1363c98c06eccac2032bbb8e5504a5f33372\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.602816 
containerd[1589]: time="2026-03-13T00:58:14.601386291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-df2cr,Uid:c4871750-34d0-4aea-af63-b853a4756c22,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"651dedafee1ecb27366d197a6ae6842e7f890975a90a9f2edb8d5e9429944f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.603880 systemd[1]: run-netns-cni\x2df265f710\x2d73e8\x2df99b\x2d0678\x2d7cf04bc21ef4.mount: Deactivated successfully. Mar 13 00:58:14.604145 kubelet[2809]: E0313 00:58:14.604063 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651dedafee1ecb27366d197a6ae6842e7f890975a90a9f2edb8d5e9429944f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.604145 kubelet[2809]: E0313 00:58:14.604106 2809 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651dedafee1ecb27366d197a6ae6842e7f890975a90a9f2edb8d5e9429944f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-df2cr" Mar 13 00:58:14.604145 kubelet[2809]: E0313 00:58:14.604122 2809 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651dedafee1ecb27366d197a6ae6842e7f890975a90a9f2edb8d5e9429944f2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-df2cr" Mar 13 00:58:14.604730 kubelet[2809]: E0313 00:58:14.604235 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-df2cr_calico-system(c4871750-34d0-4aea-af63-b853a4756c22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-df2cr_calico-system(c4871750-34d0-4aea-af63-b853a4756c22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"651dedafee1ecb27366d197a6ae6842e7f890975a90a9f2edb8d5e9429944f2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-df2cr" podUID="c4871750-34d0-4aea-af63-b853a4756c22" Mar 13 00:58:14.611717 containerd[1589]: time="2026-03-13T00:58:14.611669238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f8bbj,Uid:d33ce537-d1c4-4ba9-ad27-c83c37a2ac38,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf70fe7c497a13479f613c24d5a1363c98c06eccac2032bbb8e5504a5f33372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.611962 kubelet[2809]: E0313 00:58:14.611888 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf70fe7c497a13479f613c24d5a1363c98c06eccac2032bbb8e5504a5f33372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.611962 kubelet[2809]: E0313 
00:58:14.611922 2809 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf70fe7c497a13479f613c24d5a1363c98c06eccac2032bbb8e5504a5f33372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f8bbj" Mar 13 00:58:14.611962 kubelet[2809]: E0313 00:58:14.611942 2809 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faf70fe7c497a13479f613c24d5a1363c98c06eccac2032bbb8e5504a5f33372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f8bbj" Mar 13 00:58:14.612051 kubelet[2809]: E0313 00:58:14.611974 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-f8bbj_kube-system(d33ce537-d1c4-4ba9-ad27-c83c37a2ac38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-f8bbj_kube-system(d33ce537-d1c4-4ba9-ad27-c83c37a2ac38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faf70fe7c497a13479f613c24d5a1363c98c06eccac2032bbb8e5504a5f33372\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-f8bbj" podUID="d33ce537-d1c4-4ba9-ad27-c83c37a2ac38" Mar 13 00:58:14.631775 containerd[1589]: time="2026-03-13T00:58:14.631747311Z" level=error msg="Failed to destroy network for sandbox \"d70b1d919cb57115e66ad8624626dc06d11d8026dcf65e717cdb90bd67a076c0\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.634200 systemd[1]: run-netns-cni\x2daddc3474\x2d4d4c\x2da0a7\x2dc97e\x2dc31ad53e4578.mount: Deactivated successfully. Mar 13 00:58:14.639384 containerd[1589]: time="2026-03-13T00:58:14.639026805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cdb8c654-b6w4d,Uid:4a747119-f21e-443b-b222-e61348ddf3e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70b1d919cb57115e66ad8624626dc06d11d8026dcf65e717cdb90bd67a076c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.640085 kubelet[2809]: E0313 00:58:14.639760 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70b1d919cb57115e66ad8624626dc06d11d8026dcf65e717cdb90bd67a076c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:58:14.640085 kubelet[2809]: E0313 00:58:14.639865 2809 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70b1d919cb57115e66ad8624626dc06d11d8026dcf65e717cdb90bd67a076c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58cdb8c654-b6w4d" Mar 13 00:58:14.640085 kubelet[2809]: E0313 00:58:14.639885 2809 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d70b1d919cb57115e66ad8624626dc06d11d8026dcf65e717cdb90bd67a076c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58cdb8c654-b6w4d" Mar 13 00:58:14.640185 kubelet[2809]: E0313 00:58:14.639922 2809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58cdb8c654-b6w4d_calico-system(4a747119-f21e-443b-b222-e61348ddf3e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58cdb8c654-b6w4d_calico-system(4a747119-f21e-443b-b222-e61348ddf3e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d70b1d919cb57115e66ad8624626dc06d11d8026dcf65e717cdb90bd67a076c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58cdb8c654-b6w4d" podUID="4a747119-f21e-443b-b222-e61348ddf3e6" Mar 13 00:58:14.662110 containerd[1589]: time="2026-03-13T00:58:14.662009774Z" level=info msg="StartContainer for \"d23798934782081add0d418e344a298a6a2a7bd6140d59cf90554fc532d73cb9\" returns successfully" Mar 13 00:58:15.185065 kubelet[2809]: I0313 00:58:15.184790 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7mdcm" podStartSLOduration=4.92230861 podStartE2EDuration="28.184768272s" podCreationTimestamp="2026-03-13 00:57:47 +0000 UTC" firstStartedPulling="2026-03-13 00:57:48.910849546 +0000 UTC m=+24.947244446" lastFinishedPulling="2026-03-13 00:58:12.173309207 +0000 UTC m=+48.209704108" observedRunningTime="2026-03-13 00:58:15.182920527 +0000 UTC m=+51.219315427" watchObservedRunningTime="2026-03-13 00:58:15.184768272 +0000 UTC m=+51.221163172" Mar 13 
00:58:15.186267 kubelet[2809]: I0313 00:58:15.185857 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-backend-key-pair\") pod \"c521c696-45b4-490b-8992-2fc320e8df9b\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " Mar 13 00:58:15.186267 kubelet[2809]: I0313 00:58:15.185946 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-ca-bundle\") pod \"c521c696-45b4-490b-8992-2fc320e8df9b\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " Mar 13 00:58:15.186267 kubelet[2809]: I0313 00:58:15.185966 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-nginx-config\") pod \"c521c696-45b4-490b-8992-2fc320e8df9b\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " Mar 13 00:58:15.190123 kubelet[2809]: I0313 00:58:15.188863 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "c521c696-45b4-490b-8992-2fc320e8df9b" (UID: "c521c696-45b4-490b-8992-2fc320e8df9b"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:58:15.190123 kubelet[2809]: I0313 00:58:15.189449 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c521c696-45b4-490b-8992-2fc320e8df9b" (UID: "c521c696-45b4-490b-8992-2fc320e8df9b"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:58:15.208256 systemd[1]: var-lib-kubelet-pods-c521c696\x2d45b4\x2d490b\x2d8992\x2d2fc320e8df9b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:58:15.209767 kubelet[2809]: I0313 00:58:15.209667 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c521c696-45b4-490b-8992-2fc320e8df9b" (UID: "c521c696-45b4-490b-8992-2fc320e8df9b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:58:15.270006 systemd[1]: Created slice kubepods-besteffort-podff8f62bf_6443_4de0_9c1a_1a7823facea3.slice - libcontainer container kubepods-besteffort-podff8f62bf_6443_4de0_9c1a_1a7823facea3.slice. Mar 13 00:58:15.281957 containerd[1589]: time="2026-03-13T00:58:15.281726562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5h27,Uid:ff8f62bf-6443-4de0-9c1a-1a7823facea3,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:15.293377 kubelet[2809]: I0313 00:58:15.292842 2809 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862lw\" (UniqueName: \"kubernetes.io/projected/c521c696-45b4-490b-8992-2fc320e8df9b-kube-api-access-862lw\") pod \"c521c696-45b4-490b-8992-2fc320e8df9b\" (UID: \"c521c696-45b4-490b-8992-2fc320e8df9b\") " Mar 13 00:58:15.293377 kubelet[2809]: I0313 00:58:15.292950 2809 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 13 00:58:15.293377 kubelet[2809]: I0313 00:58:15.292961 2809 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 13 00:58:15.293377 kubelet[2809]: I0313 00:58:15.292970 2809 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c521c696-45b4-490b-8992-2fc320e8df9b-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 13 00:58:15.312981 kubelet[2809]: I0313 00:58:15.312925 2809 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c521c696-45b4-490b-8992-2fc320e8df9b-kube-api-access-862lw" (OuterVolumeSpecName: "kube-api-access-862lw") pod "c521c696-45b4-490b-8992-2fc320e8df9b" (UID: "c521c696-45b4-490b-8992-2fc320e8df9b"). InnerVolumeSpecName "kube-api-access-862lw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:58:15.394864 kubelet[2809]: I0313 00:58:15.394587 2809 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-862lw\" (UniqueName: \"kubernetes.io/projected/c521c696-45b4-490b-8992-2fc320e8df9b-kube-api-access-862lw\") on node \"localhost\" DevicePath \"\"" Mar 13 00:58:15.456875 kubelet[2809]: I0313 00:58:15.456706 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:58:15.584352 systemd[1]: var-lib-kubelet-pods-c521c696\x2d45b4\x2d490b\x2d8992\x2d2fc320e8df9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d862lw.mount: Deactivated successfully. 
Mar 13 00:58:15.600898 systemd-networkd[1444]: calibe32ebd45af: Link UP Mar 13 00:58:15.601821 systemd-networkd[1444]: calibe32ebd45af: Gained carrier Mar 13 00:58:15.630674 containerd[1589]: 2026-03-13 00:58:15.363 [ERROR][3931] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:58:15.630674 containerd[1589]: 2026-03-13 00:58:15.413 [INFO][3931] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--f5h27-eth0 csi-node-driver- calico-system ff8f62bf-6443-4de0-9c1a-1a7823facea3 742 0 2026-03-13 00:57:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-f5h27 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibe32ebd45af [] [] }} ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Namespace="calico-system" Pod="csi-node-driver-f5h27" WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-" Mar 13 00:58:15.630674 containerd[1589]: 2026-03-13 00:58:15.414 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Namespace="calico-system" Pod="csi-node-driver-f5h27" WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-eth0" Mar 13 00:58:15.630674 containerd[1589]: 2026-03-13 00:58:15.470 [INFO][3954] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" 
HandleID="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Workload="localhost-k8s-csi--node--driver--f5h27-eth0" Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.480 [INFO][3954] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" HandleID="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Workload="localhost-k8s-csi--node--driver--f5h27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-f5h27", "timestamp":"2026-03-13 00:58:15.470800828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00068a9a0)} Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.480 [INFO][3954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.480 [INFO][3954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.480 [INFO][3954] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.488 [INFO][3954] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" host="localhost" Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.508 [INFO][3954] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.526 [INFO][3954] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.533 [INFO][3954] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.540 [INFO][3954] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:15.631399 containerd[1589]: 2026-03-13 00:58:15.540 [INFO][3954] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" host="localhost" Mar 13 00:58:15.631854 containerd[1589]: 2026-03-13 00:58:15.545 [INFO][3954] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2 Mar 13 00:58:15.631854 containerd[1589]: 2026-03-13 00:58:15.556 [INFO][3954] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" host="localhost" Mar 13 00:58:15.631854 containerd[1589]: 2026-03-13 00:58:15.569 [INFO][3954] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" host="localhost" Mar 13 00:58:15.631854 containerd[1589]: 2026-03-13 00:58:15.569 [INFO][3954] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" host="localhost" Mar 13 00:58:15.631854 containerd[1589]: 2026-03-13 00:58:15.569 [INFO][3954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:58:15.631854 containerd[1589]: 2026-03-13 00:58:15.569 [INFO][3954] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" HandleID="k8s-pod-network.5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Workload="localhost-k8s-csi--node--driver--f5h27-eth0" Mar 13 00:58:15.631981 containerd[1589]: 2026-03-13 00:58:15.575 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Namespace="calico-system" Pod="csi-node-driver-f5h27" WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f5h27-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff8f62bf-6443-4de0-9c1a-1a7823facea3", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-f5h27", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe32ebd45af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:15.632139 containerd[1589]: 2026-03-13 00:58:15.575 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Namespace="calico-system" Pod="csi-node-driver-f5h27" WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-eth0" Mar 13 00:58:15.632139 containerd[1589]: 2026-03-13 00:58:15.575 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe32ebd45af ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Namespace="calico-system" Pod="csi-node-driver-f5h27" WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-eth0" Mar 13 00:58:15.632139 containerd[1589]: 2026-03-13 00:58:15.602 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Namespace="calico-system" Pod="csi-node-driver-f5h27" WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-eth0" Mar 13 00:58:15.632212 containerd[1589]: 2026-03-13 00:58:15.602 [INFO][3931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" 
Namespace="calico-system" Pod="csi-node-driver-f5h27" WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f5h27-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff8f62bf-6443-4de0-9c1a-1a7823facea3", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2", Pod:"csi-node-driver-f5h27", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe32ebd45af", MAC:"e6:6b:07:d2:30:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:15.632361 containerd[1589]: 2026-03-13 00:58:15.623 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" Namespace="calico-system" Pod="csi-node-driver-f5h27" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--f5h27-eth0" Mar 13 00:58:15.703072 containerd[1589]: time="2026-03-13T00:58:15.702974506Z" level=info msg="connecting to shim 5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2" address="unix:///run/containerd/s/1faf15f20815b81b68a9505d18fe53aca0a3fc5f7e0e5933c4f037d8ece1e8e3" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:15.759886 systemd[1]: Started cri-containerd-5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2.scope - libcontainer container 5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2. Mar 13 00:58:15.783068 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:15.836087 containerd[1589]: time="2026-03-13T00:58:15.835973000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f5h27,Uid:ff8f62bf-6443-4de0-9c1a-1a7823facea3,Namespace:calico-system,Attempt:0,} returns sandbox id \"5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2\"" Mar 13 00:58:15.839817 containerd[1589]: time="2026-03-13T00:58:15.839462989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:58:16.120243 systemd[1]: Removed slice kubepods-besteffort-podc521c696_45b4_490b_8992_2fc320e8df9b.slice - libcontainer container kubepods-besteffort-podc521c696_45b4_490b_8992_2fc320e8df9b.slice. Mar 13 00:58:16.288244 systemd[1]: Created slice kubepods-besteffort-pod73e04890_5e9f_46cc_99e4_f33aed754ef8.slice - libcontainer container kubepods-besteffort-pod73e04890_5e9f_46cc_99e4_f33aed754ef8.slice. 
Mar 13 00:58:16.298425 kubelet[2809]: I0313 00:58:16.298026 2809 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c521c696-45b4-490b-8992-2fc320e8df9b" path="/var/lib/kubelet/pods/c521c696-45b4-490b-8992-2fc320e8df9b/volumes" Mar 13 00:58:16.316942 kubelet[2809]: I0313 00:58:16.316781 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73e04890-5e9f-46cc-99e4-f33aed754ef8-whisker-backend-key-pair\") pod \"whisker-6c74b49d45-dc2wf\" (UID: \"73e04890-5e9f-46cc-99e4-f33aed754ef8\") " pod="calico-system/whisker-6c74b49d45-dc2wf" Mar 13 00:58:16.316942 kubelet[2809]: I0313 00:58:16.316818 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5km\" (UniqueName: \"kubernetes.io/projected/73e04890-5e9f-46cc-99e4-f33aed754ef8-kube-api-access-8f5km\") pod \"whisker-6c74b49d45-dc2wf\" (UID: \"73e04890-5e9f-46cc-99e4-f33aed754ef8\") " pod="calico-system/whisker-6c74b49d45-dc2wf" Mar 13 00:58:16.316942 kubelet[2809]: I0313 00:58:16.316835 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/73e04890-5e9f-46cc-99e4-f33aed754ef8-nginx-config\") pod \"whisker-6c74b49d45-dc2wf\" (UID: \"73e04890-5e9f-46cc-99e4-f33aed754ef8\") " pod="calico-system/whisker-6c74b49d45-dc2wf" Mar 13 00:58:16.316942 kubelet[2809]: I0313 00:58:16.316850 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e04890-5e9f-46cc-99e4-f33aed754ef8-whisker-ca-bundle\") pod \"whisker-6c74b49d45-dc2wf\" (UID: \"73e04890-5e9f-46cc-99e4-f33aed754ef8\") " pod="calico-system/whisker-6c74b49d45-dc2wf" Mar 13 00:58:16.609131 containerd[1589]: time="2026-03-13T00:58:16.608980906Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6c74b49d45-dc2wf,Uid:73e04890-5e9f-46cc-99e4-f33aed754ef8,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:16.976791 containerd[1589]: time="2026-03-13T00:58:16.975350556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:16.979466 containerd[1589]: time="2026-03-13T00:58:16.979088873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:58:16.983329 containerd[1589]: time="2026-03-13T00:58:16.983073188Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:16.993201 containerd[1589]: time="2026-03-13T00:58:16.991945311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:17.006256 containerd[1589]: time="2026-03-13T00:58:17.006115692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.166375892s" Mar 13 00:58:17.006256 containerd[1589]: time="2026-03-13T00:58:17.006225676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:58:17.038361 containerd[1589]: time="2026-03-13T00:58:17.037432448Z" level=info msg="CreateContainer within sandbox \"5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:58:17.085872 containerd[1589]: time="2026-03-13T00:58:17.085834500Z" level=info msg="Container d7d66475958a7237fa0ad0b9e2effb501c117b023f478fe4bfb8173f74581eac: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:17.110665 containerd[1589]: time="2026-03-13T00:58:17.110245010Z" level=info msg="CreateContainer within sandbox \"5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d7d66475958a7237fa0ad0b9e2effb501c117b023f478fe4bfb8173f74581eac\"" Mar 13 00:58:17.112808 containerd[1589]: time="2026-03-13T00:58:17.112339356Z" level=info msg="StartContainer for \"d7d66475958a7237fa0ad0b9e2effb501c117b023f478fe4bfb8173f74581eac\"" Mar 13 00:58:17.114678 containerd[1589]: time="2026-03-13T00:58:17.114292803Z" level=info msg="connecting to shim d7d66475958a7237fa0ad0b9e2effb501c117b023f478fe4bfb8173f74581eac" address="unix:///run/containerd/s/1faf15f20815b81b68a9505d18fe53aca0a3fc5f7e0e5933c4f037d8ece1e8e3" protocol=ttrpc version=3 Mar 13 00:58:17.175304 systemd-networkd[1444]: cali302467ae981: Link UP Mar 13 00:58:17.177020 systemd-networkd[1444]: cali302467ae981: Gained carrier Mar 13 00:58:17.182852 systemd[1]: Started cri-containerd-d7d66475958a7237fa0ad0b9e2effb501c117b023f478fe4bfb8173f74581eac.scope - libcontainer container d7d66475958a7237fa0ad0b9e2effb501c117b023f478fe4bfb8173f74581eac. 
Mar 13 00:58:17.239782 containerd[1589]: 2026-03-13 00:58:16.756 [ERROR][4139] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:58:17.239782 containerd[1589]: 2026-03-13 00:58:16.795 [INFO][4139] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c74b49d45--dc2wf-eth0 whisker-6c74b49d45- calico-system 73e04890-5e9f-46cc-99e4-f33aed754ef8 958 0 2026-03-13 00:58:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c74b49d45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c74b49d45-dc2wf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali302467ae981 [] [] }} ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-" Mar 13 00:58:17.239782 containerd[1589]: 2026-03-13 00:58:16.797 [INFO][4139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" Mar 13 00:58:17.239782 containerd[1589]: 2026-03-13 00:58:16.925 [INFO][4173] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" HandleID="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Workload="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:16.943 [INFO][4173] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" HandleID="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Workload="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c74b49d45-dc2wf", "timestamp":"2026-03-13 00:58:16.925271296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00011ef20)} Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:16.943 [INFO][4173] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:16.943 [INFO][4173] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:16.943 [INFO][4173] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:16.951 [INFO][4173] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" host="localhost" Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:16.970 [INFO][4173] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:16.986 [INFO][4173] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:17.005 [INFO][4173] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:17.027 [INFO][4173] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:17.240102 containerd[1589]: 2026-03-13 00:58:17.034 [INFO][4173] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" host="localhost" Mar 13 00:58:17.240875 containerd[1589]: 2026-03-13 00:58:17.052 [INFO][4173] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af Mar 13 00:58:17.240875 containerd[1589]: 2026-03-13 00:58:17.076 [INFO][4173] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" host="localhost" Mar 13 00:58:17.240875 containerd[1589]: 2026-03-13 00:58:17.099 [INFO][4173] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" host="localhost" Mar 13 00:58:17.240875 containerd[1589]: 2026-03-13 00:58:17.101 [INFO][4173] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" host="localhost" Mar 13 00:58:17.240875 containerd[1589]: 2026-03-13 00:58:17.101 [INFO][4173] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:58:17.240875 containerd[1589]: 2026-03-13 00:58:17.101 [INFO][4173] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" HandleID="k8s-pod-network.c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Workload="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" Mar 13 00:58:17.241082 containerd[1589]: 2026-03-13 00:58:17.118 [INFO][4139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c74b49d45--dc2wf-eth0", GenerateName:"whisker-6c74b49d45-", Namespace:"calico-system", SelfLink:"", UID:"73e04890-5e9f-46cc-99e4-f33aed754ef8", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c74b49d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c74b49d45-dc2wf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali302467ae981", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:17.241082 containerd[1589]: 2026-03-13 00:58:17.119 [INFO][4139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" Mar 13 00:58:17.241328 containerd[1589]: 2026-03-13 00:58:17.120 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali302467ae981 ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" Mar 13 00:58:17.241328 containerd[1589]: 2026-03-13 00:58:17.180 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" Mar 13 00:58:17.241444 containerd[1589]: 2026-03-13 00:58:17.186 [INFO][4139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" 
WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c74b49d45--dc2wf-eth0", GenerateName:"whisker-6c74b49d45-", Namespace:"calico-system", SelfLink:"", UID:"73e04890-5e9f-46cc-99e4-f33aed754ef8", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c74b49d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af", Pod:"whisker-6c74b49d45-dc2wf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali302467ae981", MAC:"aa:df:5c:01:cb:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:17.241748 containerd[1589]: 2026-03-13 00:58:17.217 [INFO][4139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" Namespace="calico-system" Pod="whisker-6c74b49d45-dc2wf" WorkloadEndpoint="localhost-k8s-whisker--6c74b49d45--dc2wf-eth0" Mar 13 00:58:17.341405 containerd[1589]: time="2026-03-13T00:58:17.341285711Z" level=info msg="connecting to shim 
c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af" address="unix:///run/containerd/s/e5cb225162c743767972fd263877f2592ff8716c3597d41ca119500745eda686" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:17.428311 containerd[1589]: time="2026-03-13T00:58:17.426741550Z" level=info msg="StartContainer for \"d7d66475958a7237fa0ad0b9e2effb501c117b023f478fe4bfb8173f74581eac\" returns successfully" Mar 13 00:58:17.431459 containerd[1589]: time="2026-03-13T00:58:17.431062735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:58:17.432243 systemd[1]: Started cri-containerd-c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af.scope - libcontainer container c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af. Mar 13 00:58:17.486435 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:17.598669 containerd[1589]: time="2026-03-13T00:58:17.595966048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c74b49d45-dc2wf,Uid:73e04890-5e9f-46cc-99e4-f33aed754ef8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af\"" Mar 13 00:58:17.617039 systemd-networkd[1444]: calibe32ebd45af: Gained IPv6LL Mar 13 00:58:18.425695 systemd-networkd[1444]: vxlan.calico: Link UP Mar 13 00:58:18.425705 systemd-networkd[1444]: vxlan.calico: Gained carrier Mar 13 00:58:18.577018 systemd-networkd[1444]: cali302467ae981: Gained IPv6LL Mar 13 00:58:18.594164 containerd[1589]: time="2026-03-13T00:58:18.593986324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:18.597180 containerd[1589]: time="2026-03-13T00:58:18.597083042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes 
read=14704317" Mar 13 00:58:18.599181 containerd[1589]: time="2026-03-13T00:58:18.599153510Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:18.606331 containerd[1589]: time="2026-03-13T00:58:18.606269397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:18.606818 containerd[1589]: time="2026-03-13T00:58:18.606444621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.17528245s" Mar 13 00:58:18.606818 containerd[1589]: time="2026-03-13T00:58:18.606662405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:58:18.609974 containerd[1589]: time="2026-03-13T00:58:18.609712900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:58:18.652677 containerd[1589]: time="2026-03-13T00:58:18.621725360Z" level=info msg="CreateContainer within sandbox \"5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:58:18.672869 containerd[1589]: time="2026-03-13T00:58:18.672836561Z" level=info msg="Container b30f106f7562e8f9f0b5e6aa197c43eda0e85b329e99333ec26dfc8ec06a3406: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:18.690850 containerd[1589]: 
time="2026-03-13T00:58:18.690113294Z" level=info msg="CreateContainer within sandbox \"5327d21aae97973b75391114706b3c491eddfeb0e04ef9aa685232a6b5ed2fe2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b30f106f7562e8f9f0b5e6aa197c43eda0e85b329e99333ec26dfc8ec06a3406\"" Mar 13 00:58:18.693241 containerd[1589]: time="2026-03-13T00:58:18.692876670Z" level=info msg="StartContainer for \"b30f106f7562e8f9f0b5e6aa197c43eda0e85b329e99333ec26dfc8ec06a3406\"" Mar 13 00:58:18.697923 containerd[1589]: time="2026-03-13T00:58:18.697899895Z" level=info msg="connecting to shim b30f106f7562e8f9f0b5e6aa197c43eda0e85b329e99333ec26dfc8ec06a3406" address="unix:///run/containerd/s/1faf15f20815b81b68a9505d18fe53aca0a3fc5f7e0e5933c4f037d8ece1e8e3" protocol=ttrpc version=3 Mar 13 00:58:18.748013 systemd[1]: Started cri-containerd-b30f106f7562e8f9f0b5e6aa197c43eda0e85b329e99333ec26dfc8ec06a3406.scope - libcontainer container b30f106f7562e8f9f0b5e6aa197c43eda0e85b329e99333ec26dfc8ec06a3406. 
Mar 13 00:58:18.929210 containerd[1589]: time="2026-03-13T00:58:18.928437798Z" level=info msg="StartContainer for \"b30f106f7562e8f9f0b5e6aa197c43eda0e85b329e99333ec26dfc8ec06a3406\" returns successfully" Mar 13 00:58:19.380216 containerd[1589]: time="2026-03-13T00:58:19.379960696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:19.387907 containerd[1589]: time="2026-03-13T00:58:19.382134993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:58:19.387907 containerd[1589]: time="2026-03-13T00:58:19.383965602Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:19.388159 containerd[1589]: time="2026-03-13T00:58:19.388086362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 778.278306ms" Mar 13 00:58:19.388159 containerd[1589]: time="2026-03-13T00:58:19.388121839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:58:19.388387 containerd[1589]: time="2026-03-13T00:58:19.388355060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:19.397710 containerd[1589]: time="2026-03-13T00:58:19.396915705Z" level=info msg="CreateContainer within sandbox 
\"c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:58:19.413196 containerd[1589]: time="2026-03-13T00:58:19.412777069Z" level=info msg="Container 48b19f6269741bb99779ad581ad463506c96f7b413dc31d7dd4102fde8486b34: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:19.427086 containerd[1589]: time="2026-03-13T00:58:19.426882910Z" level=info msg="CreateContainer within sandbox \"c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"48b19f6269741bb99779ad581ad463506c96f7b413dc31d7dd4102fde8486b34\"" Mar 13 00:58:19.429019 containerd[1589]: time="2026-03-13T00:58:19.428759664Z" level=info msg="StartContainer for \"48b19f6269741bb99779ad581ad463506c96f7b413dc31d7dd4102fde8486b34\"" Mar 13 00:58:19.432667 containerd[1589]: time="2026-03-13T00:58:19.432400053Z" level=info msg="connecting to shim 48b19f6269741bb99779ad581ad463506c96f7b413dc31d7dd4102fde8486b34" address="unix:///run/containerd/s/e5cb225162c743767972fd263877f2592ff8716c3597d41ca119500745eda686" protocol=ttrpc version=3 Mar 13 00:58:19.475823 systemd[1]: Started cri-containerd-48b19f6269741bb99779ad581ad463506c96f7b413dc31d7dd4102fde8486b34.scope - libcontainer container 48b19f6269741bb99779ad581ad463506c96f7b413dc31d7dd4102fde8486b34. 
Mar 13 00:58:19.578643 containerd[1589]: time="2026-03-13T00:58:19.577743823Z" level=info msg="StartContainer for \"48b19f6269741bb99779ad581ad463506c96f7b413dc31d7dd4102fde8486b34\" returns successfully" Mar 13 00:58:19.582158 containerd[1589]: time="2026-03-13T00:58:19.582035792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:58:19.792841 systemd-networkd[1444]: vxlan.calico: Gained IPv6LL Mar 13 00:58:19.797360 kubelet[2809]: I0313 00:58:19.797145 2809 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:58:19.799272 kubelet[2809]: I0313 00:58:19.798932 2809 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:58:20.630123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1163155434.mount: Deactivated successfully. 
Mar 13 00:58:20.671375 containerd[1589]: time="2026-03-13T00:58:20.671311716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:20.673706 containerd[1589]: time="2026-03-13T00:58:20.673538302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:58:20.685212 containerd[1589]: time="2026-03-13T00:58:20.685027881Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:20.686938 containerd[1589]: time="2026-03-13T00:58:20.686338168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.104245882s" Mar 13 00:58:20.686938 containerd[1589]: time="2026-03-13T00:58:20.686368804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:58:20.687184 containerd[1589]: time="2026-03-13T00:58:20.687141740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:20.699404 containerd[1589]: time="2026-03-13T00:58:20.699187098Z" level=info msg="CreateContainer within sandbox \"c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:58:20.715960 
containerd[1589]: time="2026-03-13T00:58:20.715822102Z" level=info msg="Container b05463e9df513b29651e70ab05b11d01ca7ef3843ca8e79809f36a25a88b1dec: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:20.731454 containerd[1589]: time="2026-03-13T00:58:20.731254589Z" level=info msg="CreateContainer within sandbox \"c77731448da1410a4972d8024faf92f8bcf7c5cf4105e5c6b9edee670da930af\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b05463e9df513b29651e70ab05b11d01ca7ef3843ca8e79809f36a25a88b1dec\"" Mar 13 00:58:20.732845 containerd[1589]: time="2026-03-13T00:58:20.732432707Z" level=info msg="StartContainer for \"b05463e9df513b29651e70ab05b11d01ca7ef3843ca8e79809f36a25a88b1dec\"" Mar 13 00:58:20.735273 containerd[1589]: time="2026-03-13T00:58:20.735237425Z" level=info msg="connecting to shim b05463e9df513b29651e70ab05b11d01ca7ef3843ca8e79809f36a25a88b1dec" address="unix:///run/containerd/s/e5cb225162c743767972fd263877f2592ff8716c3597d41ca119500745eda686" protocol=ttrpc version=3 Mar 13 00:58:20.774767 systemd[1]: Started cri-containerd-b05463e9df513b29651e70ab05b11d01ca7ef3843ca8e79809f36a25a88b1dec.scope - libcontainer container b05463e9df513b29651e70ab05b11d01ca7ef3843ca8e79809f36a25a88b1dec. 
Mar 13 00:58:20.944416 containerd[1589]: time="2026-03-13T00:58:20.944153807Z" level=info msg="StartContainer for \"b05463e9df513b29651e70ab05b11d01ca7ef3843ca8e79809f36a25a88b1dec\" returns successfully" Mar 13 00:58:21.214238 kubelet[2809]: I0313 00:58:21.213685 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f5h27" podStartSLOduration=30.443396357 podStartE2EDuration="33.213660465s" podCreationTimestamp="2026-03-13 00:57:48 +0000 UTC" firstStartedPulling="2026-03-13 00:58:15.839198078 +0000 UTC m=+51.875592977" lastFinishedPulling="2026-03-13 00:58:18.609462184 +0000 UTC m=+54.645857085" observedRunningTime="2026-03-13 00:58:19.206156941 +0000 UTC m=+55.242551840" watchObservedRunningTime="2026-03-13 00:58:21.213660465 +0000 UTC m=+57.250055365" Mar 13 00:58:21.214238 kubelet[2809]: I0313 00:58:21.213943 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c74b49d45-dc2wf" podStartSLOduration=2.127037368 podStartE2EDuration="5.213935877s" podCreationTimestamp="2026-03-13 00:58:16 +0000 UTC" firstStartedPulling="2026-03-13 00:58:17.601551186 +0000 UTC m=+53.637946085" lastFinishedPulling="2026-03-13 00:58:20.688449694 +0000 UTC m=+56.724844594" observedRunningTime="2026-03-13 00:58:21.213054275 +0000 UTC m=+57.249449175" watchObservedRunningTime="2026-03-13 00:58:21.213935877 +0000 UTC m=+57.250330797" Mar 13 00:58:25.263246 containerd[1589]: time="2026-03-13T00:58:25.263133217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855966fd9b-8t47p,Uid:ecbde260-ff56-4143-b201-3e59a89f22c5,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:25.265683 containerd[1589]: time="2026-03-13T00:58:25.265632329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f8bbj,Uid:d33ce537-d1c4-4ba9-ad27-c83c37a2ac38,Namespace:kube-system,Attempt:0,}" Mar 13 00:58:25.412767 systemd-networkd[1444]: calib2147ed6fbf: Link UP Mar 
13 00:58:25.413597 systemd-networkd[1444]: calib2147ed6fbf: Gained carrier Mar 13 00:58:25.434118 containerd[1589]: 2026-03-13 00:58:25.321 [INFO][4558] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--f8bbj-eth0 coredns-66bc5c9577- kube-system d33ce537-d1c4-4ba9-ad27-c83c37a2ac38 893 0 2026-03-13 00:57:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-f8bbj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2147ed6fbf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-" Mar 13 00:58:25.434118 containerd[1589]: 2026-03-13 00:58:25.321 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" Mar 13 00:58:25.434118 containerd[1589]: 2026-03-13 00:58:25.358 [INFO][4582] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" HandleID="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Workload="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.368 [INFO][4582] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" 
HandleID="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Workload="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004df430), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-f8bbj", "timestamp":"2026-03-13 00:58:25.358385209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002031e0)} Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.368 [INFO][4582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.368 [INFO][4582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.368 [INFO][4582] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.372 [INFO][4582] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" host="localhost" Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.379 [INFO][4582] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.386 [INFO][4582] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.388 [INFO][4582] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:25.434333 containerd[1589]: 2026-03-13 00:58:25.391 [INFO][4582] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:25.434333 
containerd[1589]: 2026-03-13 00:58:25.391 [INFO][4582] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" host="localhost" Mar 13 00:58:25.434951 containerd[1589]: 2026-03-13 00:58:25.393 [INFO][4582] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd Mar 13 00:58:25.434951 containerd[1589]: 2026-03-13 00:58:25.399 [INFO][4582] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" host="localhost" Mar 13 00:58:25.434951 containerd[1589]: 2026-03-13 00:58:25.406 [INFO][4582] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" host="localhost" Mar 13 00:58:25.434951 containerd[1589]: 2026-03-13 00:58:25.406 [INFO][4582] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" host="localhost" Mar 13 00:58:25.434951 containerd[1589]: 2026-03-13 00:58:25.406 [INFO][4582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:58:25.434951 containerd[1589]: 2026-03-13 00:58:25.406 [INFO][4582] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" HandleID="k8s-pod-network.ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Workload="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" Mar 13 00:58:25.435077 containerd[1589]: 2026-03-13 00:58:25.409 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--f8bbj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d33ce537-d1c4-4ba9-ad27-c83c37a2ac38", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-f8bbj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2147ed6fbf", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:25.435077 containerd[1589]: 2026-03-13 00:58:25.409 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" Mar 13 00:58:25.435077 containerd[1589]: 2026-03-13 00:58:25.409 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2147ed6fbf ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" Mar 13 00:58:25.435077 containerd[1589]: 2026-03-13 00:58:25.414 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" Mar 13 00:58:25.435077 containerd[1589]: 2026-03-13 00:58:25.415 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--f8bbj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d33ce537-d1c4-4ba9-ad27-c83c37a2ac38", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd", Pod:"coredns-66bc5c9577-f8bbj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2147ed6fbf", MAC:"ce:25:87:04:97:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:25.435077 containerd[1589]: 2026-03-13 00:58:25.427 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" Namespace="kube-system" Pod="coredns-66bc5c9577-f8bbj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--f8bbj-eth0" Mar 13 00:58:25.463087 containerd[1589]: time="2026-03-13T00:58:25.463002560Z" level=info msg="connecting to shim ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd" address="unix:///run/containerd/s/b00c4c3314b269b33df2a1cbabca52fee8d82bc5e9c232cec1fced8dc5b7a596" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:25.504793 systemd[1]: Started cri-containerd-ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd.scope - libcontainer container ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd. 
Mar 13 00:58:25.528689 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:25.538783 systemd-networkd[1444]: cali5d9160413e3: Link UP Mar 13 00:58:25.540363 systemd-networkd[1444]: cali5d9160413e3: Gained carrier Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.320 [INFO][4553] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0 calico-apiserver-855966fd9b- calico-system ecbde260-ff56-4143-b201-3e59a89f22c5 895 0 2026-03-13 00:57:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855966fd9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-855966fd9b-8t47p eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5d9160413e3 [] [] }} ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.321 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.358 [INFO][4580] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" HandleID="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" 
Workload="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.370 [INFO][4580] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" HandleID="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Workload="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000582f10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-855966fd9b-8t47p", "timestamp":"2026-03-13 00:58:25.358895507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe2c0)} Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.370 [INFO][4580] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.406 [INFO][4580] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.406 [INFO][4580] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.478 [INFO][4580] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.487 [INFO][4580] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.496 [INFO][4580] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.502 [INFO][4580] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.507 [INFO][4580] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.507 [INFO][4580] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.510 [INFO][4580] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980 Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.516 [INFO][4580] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.524 [INFO][4580] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.524 [INFO][4580] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" host="localhost" Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.524 [INFO][4580] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:58:25.567252 containerd[1589]: 2026-03-13 00:58:25.524 [INFO][4580] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" HandleID="k8s-pod-network.f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Workload="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" Mar 13 00:58:25.567926 containerd[1589]: 2026-03-13 00:58:25.530 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0", GenerateName:"calico-apiserver-855966fd9b-", Namespace:"calico-system", SelfLink:"", UID:"ecbde260-ff56-4143-b201-3e59a89f22c5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855966fd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-855966fd9b-8t47p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5d9160413e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:25.567926 containerd[1589]: 2026-03-13 00:58:25.531 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" Mar 13 00:58:25.567926 containerd[1589]: 2026-03-13 00:58:25.531 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d9160413e3 ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" Mar 13 00:58:25.567926 containerd[1589]: 2026-03-13 00:58:25.541 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" Mar 13 00:58:25.567926 containerd[1589]: 2026-03-13 00:58:25.543 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0", GenerateName:"calico-apiserver-855966fd9b-", Namespace:"calico-system", SelfLink:"", UID:"ecbde260-ff56-4143-b201-3e59a89f22c5", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855966fd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980", Pod:"calico-apiserver-855966fd9b-8t47p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5d9160413e3", MAC:"6a:41:2a:c3:dc:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:25.567926 containerd[1589]: 2026-03-13 00:58:25.559 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-8t47p" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--8t47p-eth0" Mar 13 00:58:25.582055 containerd[1589]: time="2026-03-13T00:58:25.581972602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f8bbj,Uid:d33ce537-d1c4-4ba9-ad27-c83c37a2ac38,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd\"" Mar 13 00:58:25.590103 containerd[1589]: time="2026-03-13T00:58:25.589964852Z" level=info msg="CreateContainer within sandbox \"ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:58:25.613291 containerd[1589]: time="2026-03-13T00:58:25.612770925Z" level=info msg="Container 9a52c6fcc305a20174b32d20945b436cea5311671ef1d2ddb5d2e28e18188fab: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:25.626839 containerd[1589]: time="2026-03-13T00:58:25.626736303Z" level=info msg="connecting to shim f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980" address="unix:///run/containerd/s/84e8617b1a8dfdc1103b23b2b8c103a8ae446fc34c89757a90cabbde57d2d15b" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:25.627501 containerd[1589]: time="2026-03-13T00:58:25.627342539Z" level=info msg="CreateContainer within sandbox \"ce281d75cc384fefc2f066c4183b98881a81bcef26e7da510f232970898d17dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a52c6fcc305a20174b32d20945b436cea5311671ef1d2ddb5d2e28e18188fab\"" Mar 13 00:58:25.630734 containerd[1589]: time="2026-03-13T00:58:25.630242577Z" level=info msg="StartContainer for \"9a52c6fcc305a20174b32d20945b436cea5311671ef1d2ddb5d2e28e18188fab\"" Mar 13 00:58:25.632106 containerd[1589]: time="2026-03-13T00:58:25.632000974Z" level=info msg="connecting to shim 
9a52c6fcc305a20174b32d20945b436cea5311671ef1d2ddb5d2e28e18188fab" address="unix:///run/containerd/s/b00c4c3314b269b33df2a1cbabca52fee8d82bc5e9c232cec1fced8dc5b7a596" protocol=ttrpc version=3 Mar 13 00:58:25.670752 systemd[1]: Started cri-containerd-9a52c6fcc305a20174b32d20945b436cea5311671ef1d2ddb5d2e28e18188fab.scope - libcontainer container 9a52c6fcc305a20174b32d20945b436cea5311671ef1d2ddb5d2e28e18188fab. Mar 13 00:58:25.675865 systemd[1]: Started cri-containerd-f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980.scope - libcontainer container f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980. Mar 13 00:58:25.705655 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:25.734712 containerd[1589]: time="2026-03-13T00:58:25.734259882Z" level=info msg="StartContainer for \"9a52c6fcc305a20174b32d20945b436cea5311671ef1d2ddb5d2e28e18188fab\" returns successfully" Mar 13 00:58:25.770428 containerd[1589]: time="2026-03-13T00:58:25.770295887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855966fd9b-8t47p,Uid:ecbde260-ff56-4143-b201-3e59a89f22c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980\"" Mar 13 00:58:25.773094 containerd[1589]: time="2026-03-13T00:58:25.773044834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:58:26.239172 kubelet[2809]: I0313 00:58:26.238824 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f8bbj" podStartSLOduration=58.238801114 podStartE2EDuration="58.238801114s" podCreationTimestamp="2026-03-13 00:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:58:26.235194151 +0000 UTC m=+62.271589061" watchObservedRunningTime="2026-03-13 
00:58:26.238801114 +0000 UTC m=+62.275196014" Mar 13 00:58:26.282223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332055542.mount: Deactivated successfully. Mar 13 00:58:26.364698 containerd[1589]: time="2026-03-13T00:58:26.362616176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x69q7,Uid:d9961465-6f87-4262-94c9-59f9582bd4cd,Namespace:kube-system,Attempt:0,}" Mar 13 00:58:26.368097 containerd[1589]: time="2026-03-13T00:58:26.368065422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-df2cr,Uid:c4871750-34d0-4aea-af63-b853a4756c22,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:26.450059 systemd-networkd[1444]: calib2147ed6fbf: Gained IPv6LL Mar 13 00:58:26.546880 systemd-networkd[1444]: cali7ff40f24d6e: Link UP Mar 13 00:58:26.548265 systemd-networkd[1444]: cali7ff40f24d6e: Gained carrier Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.423 [INFO][4774] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--x69q7-eth0 coredns-66bc5c9577- kube-system d9961465-6f87-4262-94c9-59f9582bd4cd 889 0 2026-03-13 00:57:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-x69q7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ff40f24d6e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.424 [INFO][4774] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.484 [INFO][4806] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" HandleID="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Workload="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.492 [INFO][4806] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" HandleID="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Workload="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276330), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-x69q7", "timestamp":"2026-03-13 00:58:26.484677715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112f20)} Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.492 [INFO][4806] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.493 [INFO][4806] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.493 [INFO][4806] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.497 [INFO][4806] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.506 [INFO][4806] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.514 [INFO][4806] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.518 [INFO][4806] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.522 [INFO][4806] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.522 [INFO][4806] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.524 [INFO][4806] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.529 [INFO][4806] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.539 [INFO][4806] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.539 [INFO][4806] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" host="localhost" Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.539 [INFO][4806] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:58:26.565979 containerd[1589]: 2026-03-13 00:58:26.539 [INFO][4806] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" HandleID="k8s-pod-network.78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Workload="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" Mar 13 00:58:26.567958 containerd[1589]: 2026-03-13 00:58:26.543 [INFO][4774] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x69q7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d9961465-6f87-4262-94c9-59f9582bd4cd", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-x69q7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff40f24d6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:26.567958 containerd[1589]: 2026-03-13 00:58:26.543 [INFO][4774] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" Mar 13 00:58:26.567958 containerd[1589]: 2026-03-13 00:58:26.543 [INFO][4774] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ff40f24d6e ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" Mar 13 
00:58:26.567958 containerd[1589]: 2026-03-13 00:58:26.548 [INFO][4774] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" Mar 13 00:58:26.567958 containerd[1589]: 2026-03-13 00:58:26.549 [INFO][4774] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--x69q7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d9961465-6f87-4262-94c9-59f9582bd4cd", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b", Pod:"coredns-66bc5c9577-x69q7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ff40f24d6e", 
MAC:"5e:1b:fb:c3:56:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:26.567958 containerd[1589]: 2026-03-13 00:58:26.560 [INFO][4774] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" Namespace="kube-system" Pod="coredns-66bc5c9577-x69q7" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--x69q7-eth0" Mar 13 00:58:26.608068 containerd[1589]: time="2026-03-13T00:58:26.607315680Z" level=info msg="connecting to shim 78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b" address="unix:///run/containerd/s/d43f8eca7e363ce77cb8c969d4197efa98ec18d64aab7b0591610d3c0c486968" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:26.656831 systemd[1]: Started cri-containerd-78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b.scope - libcontainer container 78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b. 
Mar 13 00:58:26.660382 systemd-networkd[1444]: cali3da266aaf0d: Link UP Mar 13 00:58:26.660824 systemd-networkd[1444]: cali3da266aaf0d: Gained carrier Mar 13 00:58:26.683585 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.461 [INFO][4791] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0 goldmane-cccfbd5cf- calico-system c4871750-34d0-4aea-af63-b853a4756c22 892 0 2026-03-13 00:57:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-df2cr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3da266aaf0d [] [] }} ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.461 [INFO][4791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.514 [INFO][4819] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" HandleID="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Workload="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.520 [INFO][4819] ipam/ipam_plugin.go 301: Auto 
assigning IP ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" HandleID="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Workload="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-df2cr", "timestamp":"2026-03-13 00:58:26.514002319 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000187ce0)} Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.520 [INFO][4819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.539 [INFO][4819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.539 [INFO][4819] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.598 [INFO][4819] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.608 [INFO][4819] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.614 [INFO][4819] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.625 [INFO][4819] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.629 [INFO][4819] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.630 [INFO][4819] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.633 [INFO][4819] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.640 [INFO][4819] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.647 [INFO][4819] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.649 [INFO][4819] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" host="localhost" Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.649 [INFO][4819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:58:26.691568 containerd[1589]: 2026-03-13 00:58:26.649 [INFO][4819] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" HandleID="k8s-pod-network.3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Workload="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" Mar 13 00:58:26.692101 containerd[1589]: 2026-03-13 00:58:26.653 [INFO][4791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c4871750-34d0-4aea-af63-b853a4756c22", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-df2cr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3da266aaf0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:26.692101 containerd[1589]: 2026-03-13 00:58:26.653 [INFO][4791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" Mar 13 00:58:26.692101 containerd[1589]: 2026-03-13 00:58:26.653 [INFO][4791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3da266aaf0d ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" Mar 13 00:58:26.692101 containerd[1589]: 2026-03-13 00:58:26.667 [INFO][4791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" Mar 13 00:58:26.692101 containerd[1589]: 2026-03-13 00:58:26.668 [INFO][4791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"c4871750-34d0-4aea-af63-b853a4756c22", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee", Pod:"goldmane-cccfbd5cf-df2cr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3da266aaf0d", MAC:"56:43:8c:d8:24:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:26.692101 containerd[1589]: 2026-03-13 00:58:26.685 [INFO][4791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" Namespace="calico-system" Pod="goldmane-cccfbd5cf-df2cr" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--df2cr-eth0" Mar 13 00:58:26.741580 containerd[1589]: time="2026-03-13T00:58:26.741422487Z" level=info msg="connecting to shim 
3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee" address="unix:///run/containerd/s/828fd4d01dd7fade5c2b7c835b00279432b44d9fbdc2cf61013772eaaff7f3c8" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:26.742966 containerd[1589]: time="2026-03-13T00:58:26.742900733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x69q7,Uid:d9961465-6f87-4262-94c9-59f9582bd4cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b\"" Mar 13 00:58:26.752782 containerd[1589]: time="2026-03-13T00:58:26.751897056Z" level=info msg="CreateContainer within sandbox \"78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:58:26.779397 containerd[1589]: time="2026-03-13T00:58:26.779314215Z" level=info msg="Container 531189a259cbcde4386908c9752cb177928bf40faf82d5367a1c889b8bd64a55: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:26.793710 containerd[1589]: time="2026-03-13T00:58:26.793647632Z" level=info msg="CreateContainer within sandbox \"78adab0d8d252e1f4c66075d1142e452dbf681f0b12ec2b139c8a3f3af3a5f1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"531189a259cbcde4386908c9752cb177928bf40faf82d5367a1c889b8bd64a55\"" Mar 13 00:58:26.796373 containerd[1589]: time="2026-03-13T00:58:26.796321170Z" level=info msg="StartContainer for \"531189a259cbcde4386908c9752cb177928bf40faf82d5367a1c889b8bd64a55\"" Mar 13 00:58:26.798933 systemd[1]: Started cri-containerd-3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee.scope - libcontainer container 3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee. 
Mar 13 00:58:26.805102 containerd[1589]: time="2026-03-13T00:58:26.805022665Z" level=info msg="connecting to shim 531189a259cbcde4386908c9752cb177928bf40faf82d5367a1c889b8bd64a55" address="unix:///run/containerd/s/d43f8eca7e363ce77cb8c969d4197efa98ec18d64aab7b0591610d3c0c486968" protocol=ttrpc version=3 Mar 13 00:58:26.824468 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:26.837768 systemd[1]: Started cri-containerd-531189a259cbcde4386908c9752cb177928bf40faf82d5367a1c889b8bd64a55.scope - libcontainer container 531189a259cbcde4386908c9752cb177928bf40faf82d5367a1c889b8bd64a55. Mar 13 00:58:26.895181 containerd[1589]: time="2026-03-13T00:58:26.895144409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-df2cr,Uid:c4871750-34d0-4aea-af63-b853a4756c22,Namespace:calico-system,Attempt:0,} returns sandbox id \"3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee\"" Mar 13 00:58:26.918917 containerd[1589]: time="2026-03-13T00:58:26.918745758Z" level=info msg="StartContainer for \"531189a259cbcde4386908c9752cb177928bf40faf82d5367a1c889b8bd64a55\" returns successfully" Mar 13 00:58:27.276734 kubelet[2809]: I0313 00:58:27.276455 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x69q7" podStartSLOduration=59.276434768 podStartE2EDuration="59.276434768s" podCreationTimestamp="2026-03-13 00:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:58:27.259211273 +0000 UTC m=+63.295606173" watchObservedRunningTime="2026-03-13 00:58:27.276434768 +0000 UTC m=+63.312829667" Mar 13 00:58:27.536708 systemd-networkd[1444]: cali5d9160413e3: Gained IPv6LL Mar 13 00:58:27.658425 containerd[1589]: time="2026-03-13T00:58:27.658268958Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:27.659512 containerd[1589]: time="2026-03-13T00:58:27.659362097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:58:27.661255 containerd[1589]: time="2026-03-13T00:58:27.661147243Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:27.664259 containerd[1589]: time="2026-03-13T00:58:27.664214826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:27.664862 systemd-networkd[1444]: cali7ff40f24d6e: Gained IPv6LL Mar 13 00:58:27.666808 containerd[1589]: time="2026-03-13T00:58:27.666730124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.893645356s" Mar 13 00:58:27.666808 containerd[1589]: time="2026-03-13T00:58:27.666762534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:58:27.668998 containerd[1589]: time="2026-03-13T00:58:27.668967081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:58:27.674636 containerd[1589]: time="2026-03-13T00:58:27.674443029Z" level=info msg="CreateContainer within sandbox \"f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:58:27.686782 containerd[1589]: time="2026-03-13T00:58:27.686677003Z" level=info msg="Container a74f5791cbded14baed8c18af4c2a935b6312f47720cad633a8559e9be2e376d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:27.696754 containerd[1589]: time="2026-03-13T00:58:27.696629407Z" level=info msg="CreateContainer within sandbox \"f68dddbe85f8737325e90a937fc8bb36e2894ed8e4f597fe2648bef74e028980\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a74f5791cbded14baed8c18af4c2a935b6312f47720cad633a8559e9be2e376d\"" Mar 13 00:58:27.697803 containerd[1589]: time="2026-03-13T00:58:27.697729727Z" level=info msg="StartContainer for \"a74f5791cbded14baed8c18af4c2a935b6312f47720cad633a8559e9be2e376d\"" Mar 13 00:58:27.699267 containerd[1589]: time="2026-03-13T00:58:27.699145127Z" level=info msg="connecting to shim a74f5791cbded14baed8c18af4c2a935b6312f47720cad633a8559e9be2e376d" address="unix:///run/containerd/s/84e8617b1a8dfdc1103b23b2b8c103a8ae446fc34c89757a90cabbde57d2d15b" protocol=ttrpc version=3 Mar 13 00:58:27.727743 systemd[1]: Started cri-containerd-a74f5791cbded14baed8c18af4c2a935b6312f47720cad633a8559e9be2e376d.scope - libcontainer container a74f5791cbded14baed8c18af4c2a935b6312f47720cad633a8559e9be2e376d. 
Mar 13 00:58:27.793090 containerd[1589]: time="2026-03-13T00:58:27.792615504Z" level=info msg="StartContainer for \"a74f5791cbded14baed8c18af4c2a935b6312f47720cad633a8559e9be2e376d\" returns successfully" Mar 13 00:58:27.792886 systemd-networkd[1444]: cali3da266aaf0d: Gained IPv6LL Mar 13 00:58:28.263562 containerd[1589]: time="2026-03-13T00:58:28.263359327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855966fd9b-cct9s,Uid:d66d3609-5c28-4fe9-b409-dba4dd44da00,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:28.472670 systemd-networkd[1444]: cali77a4d7ad996: Link UP Mar 13 00:58:28.473739 systemd-networkd[1444]: cali77a4d7ad996: Gained carrier Mar 13 00:58:28.487426 kubelet[2809]: I0313 00:58:28.487335 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-855966fd9b-8t47p" podStartSLOduration=40.592005114 podStartE2EDuration="42.487315529s" podCreationTimestamp="2026-03-13 00:57:46 +0000 UTC" firstStartedPulling="2026-03-13 00:58:25.772600413 +0000 UTC m=+61.808995313" lastFinishedPulling="2026-03-13 00:58:27.667910828 +0000 UTC m=+63.704305728" observedRunningTime="2026-03-13 00:58:28.267353301 +0000 UTC m=+64.303748201" watchObservedRunningTime="2026-03-13 00:58:28.487315529 +0000 UTC m=+64.523710429" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.335 [INFO][5051] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0 calico-apiserver-855966fd9b- calico-system d66d3609-5c28-4fe9-b409-dba4dd44da00 883 0 2026-03-13 00:57:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855966fd9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-855966fd9b-cct9s eth0 calico-apiserver [] [] 
[kns.calico-system ksa.calico-system.calico-apiserver] cali77a4d7ad996 [] [] }} ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.335 [INFO][5051] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.413 [INFO][5066] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" HandleID="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Workload="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.425 [INFO][5066] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" HandleID="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Workload="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00069c030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-855966fd9b-cct9s", "timestamp":"2026-03-13 00:58:28.413109101 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000267a20)} Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.425 [INFO][5066] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.425 [INFO][5066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.425 [INFO][5066] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.429 [INFO][5066] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.434 [INFO][5066] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.440 [INFO][5066] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.443 [INFO][5066] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.446 [INFO][5066] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.446 [INFO][5066] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.448 [INFO][5066] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.454 [INFO][5066] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" host="localhost" Mar 13 00:58:28.493194 
containerd[1589]: 2026-03-13 00:58:28.465 [INFO][5066] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.465 [INFO][5066] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" host="localhost" Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.465 [INFO][5066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:58:28.493194 containerd[1589]: 2026-03-13 00:58:28.465 [INFO][5066] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" HandleID="k8s-pod-network.be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Workload="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" Mar 13 00:58:28.493891 containerd[1589]: 2026-03-13 00:58:28.468 [INFO][5051] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0", GenerateName:"calico-apiserver-855966fd9b-", Namespace:"calico-system", SelfLink:"", UID:"d66d3609-5c28-4fe9-b409-dba4dd44da00", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"855966fd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-855966fd9b-cct9s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77a4d7ad996", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:28.493891 containerd[1589]: 2026-03-13 00:58:28.469 [INFO][5051] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" Mar 13 00:58:28.493891 containerd[1589]: 2026-03-13 00:58:28.469 [INFO][5051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77a4d7ad996 ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" Mar 13 00:58:28.493891 containerd[1589]: 2026-03-13 00:58:28.473 [INFO][5051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" Mar 13 
00:58:28.493891 containerd[1589]: 2026-03-13 00:58:28.474 [INFO][5051] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0", GenerateName:"calico-apiserver-855966fd9b-", Namespace:"calico-system", SelfLink:"", UID:"d66d3609-5c28-4fe9-b409-dba4dd44da00", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855966fd9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b", Pod:"calico-apiserver-855966fd9b-cct9s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77a4d7ad996", MAC:"f2:f8:d7:de:b8:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:28.493891 containerd[1589]: 2026-03-13 
00:58:28.486 [INFO][5051] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" Namespace="calico-system" Pod="calico-apiserver-855966fd9b-cct9s" WorkloadEndpoint="localhost-k8s-calico--apiserver--855966fd9b--cct9s-eth0" Mar 13 00:58:28.658631 containerd[1589]: time="2026-03-13T00:58:28.658110940Z" level=info msg="connecting to shim be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b" address="unix:///run/containerd/s/9171400cb99e22a1d3bb52af63c9ff611444c4105c1d622813540c9f0407b03e" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:28.797906 systemd[1]: Started cri-containerd-be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b.scope - libcontainer container be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b. Mar 13 00:58:28.857090 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:28.923393 containerd[1589]: time="2026-03-13T00:58:28.923047459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855966fd9b-cct9s,Uid:d66d3609-5c28-4fe9-b409-dba4dd44da00,Namespace:calico-system,Attempt:0,} returns sandbox id \"be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b\"" Mar 13 00:58:28.940415 containerd[1589]: time="2026-03-13T00:58:28.940374178Z" level=info msg="CreateContainer within sandbox \"be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:58:28.959700 containerd[1589]: time="2026-03-13T00:58:28.954745941Z" level=info msg="Container f7311058ea45861b86ecd05e68488146c91f85007669985caf6520b4fac58183: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:28.972776 containerd[1589]: time="2026-03-13T00:58:28.972743994Z" level=info msg="CreateContainer within sandbox 
\"be694b341060e7ab01e3917eeb0e49934cea890ae8233de6d27977f53ed7b67b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f7311058ea45861b86ecd05e68488146c91f85007669985caf6520b4fac58183\"" Mar 13 00:58:28.973772 containerd[1589]: time="2026-03-13T00:58:28.973462389Z" level=info msg="StartContainer for \"f7311058ea45861b86ecd05e68488146c91f85007669985caf6520b4fac58183\"" Mar 13 00:58:28.977398 containerd[1589]: time="2026-03-13T00:58:28.977348149Z" level=info msg="connecting to shim f7311058ea45861b86ecd05e68488146c91f85007669985caf6520b4fac58183" address="unix:///run/containerd/s/9171400cb99e22a1d3bb52af63c9ff611444c4105c1d622813540c9f0407b03e" protocol=ttrpc version=3 Mar 13 00:58:29.016846 systemd[1]: Started cri-containerd-f7311058ea45861b86ecd05e68488146c91f85007669985caf6520b4fac58183.scope - libcontainer container f7311058ea45861b86ecd05e68488146c91f85007669985caf6520b4fac58183. Mar 13 00:58:29.141231 containerd[1589]: time="2026-03-13T00:58:29.141096601Z" level=info msg="StartContainer for \"f7311058ea45861b86ecd05e68488146c91f85007669985caf6520b4fac58183\" returns successfully" Mar 13 00:58:29.638078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1235363809.mount: Deactivated successfully. 
Mar 13 00:58:30.267653 containerd[1589]: time="2026-03-13T00:58:30.265675049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cdb8c654-b6w4d,Uid:4a747119-f21e-443b-b222-e61348ddf3e6,Namespace:calico-system,Attempt:0,}" Mar 13 00:58:30.272648 kubelet[2809]: I0313 00:58:30.272443 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:58:30.416785 systemd-networkd[1444]: cali77a4d7ad996: Gained IPv6LL Mar 13 00:58:30.498409 containerd[1589]: time="2026-03-13T00:58:30.497852094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:30.509898 containerd[1589]: time="2026-03-13T00:58:30.509861400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:58:30.511428 containerd[1589]: time="2026-03-13T00:58:30.511313421Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:30.515110 containerd[1589]: time="2026-03-13T00:58:30.515001938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:30.516157 containerd[1589]: time="2026-03-13T00:58:30.516061227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.847061664s" Mar 13 00:58:30.516157 containerd[1589]: time="2026-03-13T00:58:30.516129544Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:58:30.526360 containerd[1589]: time="2026-03-13T00:58:30.525739730Z" level=info msg="CreateContainer within sandbox \"3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:58:30.538015 containerd[1589]: time="2026-03-13T00:58:30.537935925Z" level=info msg="Container 8dda709b5c57197e777cdbbac34a15dcc2deb15324290e1e4b619d3ae7e13462: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:30.552379 containerd[1589]: time="2026-03-13T00:58:30.552306111Z" level=info msg="CreateContainer within sandbox \"3dd8f214592aa76985e3bc0686b699f39902797ba53f976d123376abbd9750ee\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8dda709b5c57197e777cdbbac34a15dcc2deb15324290e1e4b619d3ae7e13462\"" Mar 13 00:58:30.553841 containerd[1589]: time="2026-03-13T00:58:30.553757437Z" level=info msg="StartContainer for \"8dda709b5c57197e777cdbbac34a15dcc2deb15324290e1e4b619d3ae7e13462\"" Mar 13 00:58:30.557315 systemd-networkd[1444]: calic98e3b95839: Link UP Mar 13 00:58:30.559901 systemd-networkd[1444]: calic98e3b95839: Gained carrier Mar 13 00:58:30.560117 containerd[1589]: time="2026-03-13T00:58:30.560074112Z" level=info msg="connecting to shim 8dda709b5c57197e777cdbbac34a15dcc2deb15324290e1e4b619d3ae7e13462" address="unix:///run/containerd/s/828fd4d01dd7fade5c2b7c835b00279432b44d9fbdc2cf61013772eaaff7f3c8" protocol=ttrpc version=3 Mar 13 00:58:30.575616 kubelet[2809]: I0313 00:58:30.575379 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-855966fd9b-cct9s" podStartSLOduration=44.575363636 podStartE2EDuration="44.575363636s" podCreationTimestamp="2026-03-13 00:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-13 00:58:29.290723081 +0000 UTC m=+65.327117991" watchObservedRunningTime="2026-03-13 00:58:30.575363636 +0000 UTC m=+66.611758537" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.382 [INFO][5199] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0 calico-kube-controllers-58cdb8c654- calico-system 4a747119-f21e-443b-b222-e61348ddf3e6 894 0 2026-03-13 00:57:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58cdb8c654 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58cdb8c654-b6w4d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic98e3b95839 [] [] }} ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.382 [INFO][5199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.475 [INFO][5215] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" HandleID="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Workload="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" Mar 13 00:58:30.585387 
containerd[1589]: 2026-03-13 00:58:30.488 [INFO][5215] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" HandleID="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Workload="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c6570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58cdb8c654-b6w4d", "timestamp":"2026-03-13 00:58:30.47525801 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00069eb00)} Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.489 [INFO][5215] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.489 [INFO][5215] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.489 [INFO][5215] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.494 [INFO][5215] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.506 [INFO][5215] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.513 [INFO][5215] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.516 [INFO][5215] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.521 [INFO][5215] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.521 [INFO][5215] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.525 [INFO][5215] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894 Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.530 [INFO][5215] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.538 [INFO][5215] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.538 [INFO][5215] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" host="localhost" Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.538 [INFO][5215] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:58:30.585387 containerd[1589]: 2026-03-13 00:58:30.538 [INFO][5215] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" HandleID="k8s-pod-network.442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Workload="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" Mar 13 00:58:30.586326 containerd[1589]: 2026-03-13 00:58:30.546 [INFO][5199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0", GenerateName:"calico-kube-controllers-58cdb8c654-", Namespace:"calico-system", SelfLink:"", UID:"4a747119-f21e-443b-b222-e61348ddf3e6", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cdb8c654", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58cdb8c654-b6w4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic98e3b95839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:30.586326 containerd[1589]: 2026-03-13 00:58:30.546 [INFO][5199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" Mar 13 00:58:30.586326 containerd[1589]: 2026-03-13 00:58:30.546 [INFO][5199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic98e3b95839 ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" Mar 13 00:58:30.586326 containerd[1589]: 2026-03-13 00:58:30.562 [INFO][5199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" Mar 13 00:58:30.586326 containerd[1589]: 
2026-03-13 00:58:30.563 [INFO][5199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0", GenerateName:"calico-kube-controllers-58cdb8c654-", Namespace:"calico-system", SelfLink:"", UID:"4a747119-f21e-443b-b222-e61348ddf3e6", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cdb8c654", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894", Pod:"calico-kube-controllers-58cdb8c654-b6w4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic98e3b95839", MAC:"8a:52:5b:8a:bb:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:58:30.586326 containerd[1589]: 
2026-03-13 00:58:30.577 [INFO][5199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" Namespace="calico-system" Pod="calico-kube-controllers-58cdb8c654-b6w4d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cdb8c654--b6w4d-eth0" Mar 13 00:58:30.608733 systemd[1]: Started cri-containerd-8dda709b5c57197e777cdbbac34a15dcc2deb15324290e1e4b619d3ae7e13462.scope - libcontainer container 8dda709b5c57197e777cdbbac34a15dcc2deb15324290e1e4b619d3ae7e13462. Mar 13 00:58:30.657240 containerd[1589]: time="2026-03-13T00:58:30.657095214Z" level=info msg="connecting to shim 442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894" address="unix:///run/containerd/s/5e54012e5d0dc912a954d4b5b17ef5764d08048b3cccf239303a48c021ed8347" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:58:30.696227 systemd[1]: Started cri-containerd-442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894.scope - libcontainer container 442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894. 
Mar 13 00:58:30.722389 systemd-resolved[1447]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:58:30.740437 containerd[1589]: time="2026-03-13T00:58:30.740341572Z" level=info msg="StartContainer for \"8dda709b5c57197e777cdbbac34a15dcc2deb15324290e1e4b619d3ae7e13462\" returns successfully" Mar 13 00:58:30.775741 containerd[1589]: time="2026-03-13T00:58:30.775676789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cdb8c654-b6w4d,Uid:4a747119-f21e-443b-b222-e61348ddf3e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894\"" Mar 13 00:58:30.779585 containerd[1589]: time="2026-03-13T00:58:30.779379983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 00:58:31.475954 kubelet[2809]: I0313 00:58:31.475389 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-df2cr" podStartSLOduration=41.855980307 podStartE2EDuration="45.475371056s" podCreationTimestamp="2026-03-13 00:57:46 +0000 UTC" firstStartedPulling="2026-03-13 00:58:26.899467027 +0000 UTC m=+62.935861926" lastFinishedPulling="2026-03-13 00:58:30.518857776 +0000 UTC m=+66.555252675" observedRunningTime="2026-03-13 00:58:31.298453246 +0000 UTC m=+67.334848146" watchObservedRunningTime="2026-03-13 00:58:31.475371056 +0000 UTC m=+67.511765956" Mar 13 00:58:32.144992 systemd-networkd[1444]: calic98e3b95839: Gained IPv6LL Mar 13 00:58:32.564580 containerd[1589]: time="2026-03-13T00:58:32.564386023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:32.573279 containerd[1589]: time="2026-03-13T00:58:32.573202266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 13 
00:58:32.574749 containerd[1589]: time="2026-03-13T00:58:32.574661850Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:32.577466 containerd[1589]: time="2026-03-13T00:58:32.577384031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:58:32.578126 containerd[1589]: time="2026-03-13T00:58:32.578032636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.798595255s" Mar 13 00:58:32.578126 containerd[1589]: time="2026-03-13T00:58:32.578098428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 13 00:58:32.598800 containerd[1589]: time="2026-03-13T00:58:32.597861779Z" level=info msg="CreateContainer within sandbox \"442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 00:58:32.606228 containerd[1589]: time="2026-03-13T00:58:32.606159055Z" level=info msg="Container 1f2367f5061c4d0c075cb80ff8fda985859cc100b3ff8e3543d09f8f7498037a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:58:32.616185 containerd[1589]: time="2026-03-13T00:58:32.616101452Z" level=info msg="CreateContainer within sandbox \"442c48e0ddcbae71039edd8d1ded9bb3c8812d13daeb1c7d8308e56c65a1d894\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1f2367f5061c4d0c075cb80ff8fda985859cc100b3ff8e3543d09f8f7498037a\"" Mar 13 00:58:32.616998 containerd[1589]: time="2026-03-13T00:58:32.616849483Z" level=info msg="StartContainer for \"1f2367f5061c4d0c075cb80ff8fda985859cc100b3ff8e3543d09f8f7498037a\"" Mar 13 00:58:32.618442 containerd[1589]: time="2026-03-13T00:58:32.618289200Z" level=info msg="connecting to shim 1f2367f5061c4d0c075cb80ff8fda985859cc100b3ff8e3543d09f8f7498037a" address="unix:///run/containerd/s/5e54012e5d0dc912a954d4b5b17ef5764d08048b3cccf239303a48c021ed8347" protocol=ttrpc version=3 Mar 13 00:58:32.651729 systemd[1]: Started cri-containerd-1f2367f5061c4d0c075cb80ff8fda985859cc100b3ff8e3543d09f8f7498037a.scope - libcontainer container 1f2367f5061c4d0c075cb80ff8fda985859cc100b3ff8e3543d09f8f7498037a. Mar 13 00:58:32.721701 containerd[1589]: time="2026-03-13T00:58:32.721631166Z" level=info msg="StartContainer for \"1f2367f5061c4d0c075cb80ff8fda985859cc100b3ff8e3543d09f8f7498037a\" returns successfully" Mar 13 00:58:33.307290 kubelet[2809]: I0313 00:58:33.307119 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58cdb8c654-b6w4d" podStartSLOduration=43.507065717 podStartE2EDuration="45.307101935s" podCreationTimestamp="2026-03-13 00:57:48 +0000 UTC" firstStartedPulling="2026-03-13 00:58:30.7790235 +0000 UTC m=+66.815418400" lastFinishedPulling="2026-03-13 00:58:32.579059717 +0000 UTC m=+68.615454618" observedRunningTime="2026-03-13 00:58:33.306673044 +0000 UTC m=+69.343067975" watchObservedRunningTime="2026-03-13 00:58:33.307101935 +0000 UTC m=+69.343496834" Mar 13 00:58:37.380019 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:47660.service - OpenSSH per-connection server daemon (10.0.0.1:47660). 
Mar 13 00:58:37.470791 sshd[5468]: Accepted publickey for core from 10.0.0.1 port 47660 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:37.472849 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:37.483394 systemd-logind[1563]: New session 10 of user core. Mar 13 00:58:37.498750 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:58:37.724375 sshd[5471]: Connection closed by 10.0.0.1 port 47660 Mar 13 00:58:37.725750 sshd-session[5468]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:37.731624 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:47660.service: Deactivated successfully. Mar 13 00:58:37.734686 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:58:37.736579 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:58:37.739413 systemd-logind[1563]: Removed session 10. Mar 13 00:58:42.740251 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:40770.service - OpenSSH per-connection server daemon (10.0.0.1:40770). Mar 13 00:58:42.823967 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 40770 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:42.825851 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:42.831754 systemd-logind[1563]: New session 11 of user core. Mar 13 00:58:42.841769 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:58:42.951763 sshd[5508]: Connection closed by 10.0.0.1 port 40770 Mar 13 00:58:42.952809 sshd-session[5505]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:42.960581 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:40770.service: Deactivated successfully. Mar 13 00:58:42.963324 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:58:42.965001 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. 
Mar 13 00:58:42.966968 systemd-logind[1563]: Removed session 11. Mar 13 00:58:47.977002 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:40772.service - OpenSSH per-connection server daemon (10.0.0.1:40772). Mar 13 00:58:48.091510 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 40772 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:48.094212 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:48.104119 systemd-logind[1563]: New session 12 of user core. Mar 13 00:58:48.120758 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:58:48.296690 sshd[5550]: Connection closed by 10.0.0.1 port 40772 Mar 13 00:58:48.298560 sshd-session[5547]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:48.309221 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:40772.service: Deactivated successfully. Mar 13 00:58:48.314141 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:58:48.317288 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:58:48.320185 systemd-logind[1563]: Removed session 12. Mar 13 00:58:52.936082 kubelet[2809]: I0313 00:58:52.935952 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:58:53.310391 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:33908.service - OpenSSH per-connection server daemon (10.0.0.1:33908). Mar 13 00:58:53.380369 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 33908 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:53.382126 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:53.388760 systemd-logind[1563]: New session 13 of user core. Mar 13 00:58:53.402711 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 13 00:58:53.540414 sshd[5579]: Connection closed by 10.0.0.1 port 33908 Mar 13 00:58:53.542119 sshd-session[5576]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:53.555625 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:33908.service: Deactivated successfully. Mar 13 00:58:53.558198 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:58:53.559381 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:58:53.563634 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:33924.service - OpenSSH per-connection server daemon (10.0.0.1:33924). Mar 13 00:58:53.565061 systemd-logind[1563]: Removed session 13. Mar 13 00:58:53.632768 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 33924 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:53.634946 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:53.641425 systemd-logind[1563]: New session 14 of user core. Mar 13 00:58:53.649749 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:58:53.873579 sshd[5604]: Connection closed by 10.0.0.1 port 33924 Mar 13 00:58:53.874860 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:53.893268 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:33924.service: Deactivated successfully. Mar 13 00:58:53.899569 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:58:53.903376 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:58:53.911215 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:33940.service - OpenSSH per-connection server daemon (10.0.0.1:33940). Mar 13 00:58:53.914922 systemd-logind[1563]: Removed session 14. 
Mar 13 00:58:53.991770 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 33940 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:53.993370 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:54.000370 systemd-logind[1563]: New session 15 of user core. Mar 13 00:58:54.012717 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:58:54.118094 sshd[5628]: Connection closed by 10.0.0.1 port 33940 Mar 13 00:58:54.118540 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:54.124742 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:33940.service: Deactivated successfully. Mar 13 00:58:54.129351 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:58:54.130722 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:58:54.132408 systemd-logind[1563]: Removed session 15. Mar 13 00:58:59.136751 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:33944.service - OpenSSH per-connection server daemon (10.0.0.1:33944). Mar 13 00:58:59.199915 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 33944 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:59.201822 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:59.208263 systemd-logind[1563]: New session 16 of user core. Mar 13 00:58:59.221792 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:58:59.358283 sshd[5656]: Connection closed by 10.0.0.1 port 33944 Mar 13 00:58:59.358965 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:59.371835 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:33944.service: Deactivated successfully. Mar 13 00:58:59.373985 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:58:59.375056 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. 
Mar 13 00:58:59.379119 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:33960.service - OpenSSH per-connection server daemon (10.0.0.1:33960). Mar 13 00:58:59.380942 systemd-logind[1563]: Removed session 16. Mar 13 00:58:59.466051 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 33960 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:59.467598 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:59.473972 systemd-logind[1563]: New session 17 of user core. Mar 13 00:58:59.480721 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:58:59.736634 sshd[5672]: Connection closed by 10.0.0.1 port 33960 Mar 13 00:58:59.737582 sshd-session[5669]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:59.755796 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:33960.service: Deactivated successfully. Mar 13 00:58:59.759033 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:58:59.761809 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:58:59.764814 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:33970.service - OpenSSH per-connection server daemon (10.0.0.1:33970). Mar 13 00:58:59.766929 systemd-logind[1563]: Removed session 17. Mar 13 00:58:59.838738 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 33970 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:58:59.840749 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:59.847689 systemd-logind[1563]: New session 18 of user core. Mar 13 00:58:59.865717 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 13 00:59:00.488962 sshd[5687]: Connection closed by 10.0.0.1 port 33970 Mar 13 00:59:00.489422 sshd-session[5684]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:00.502325 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:33970.service: Deactivated successfully. Mar 13 00:59:00.506366 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:59:00.515916 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:59:00.519886 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:53860.service - OpenSSH per-connection server daemon (10.0.0.1:53860). Mar 13 00:59:00.522345 systemd-logind[1563]: Removed session 18. Mar 13 00:59:00.593027 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 53860 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:59:00.594671 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:00.601547 systemd-logind[1563]: New session 19 of user core. Mar 13 00:59:00.621704 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:59:00.999861 sshd[5716]: Connection closed by 10.0.0.1 port 53860 Mar 13 00:59:01.000963 sshd-session[5711]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:01.012121 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:53860.service: Deactivated successfully. Mar 13 00:59:01.017108 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:59:01.020590 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:59:01.025410 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862). Mar 13 00:59:01.027055 systemd-logind[1563]: Removed session 19. 
Mar 13 00:59:01.116690 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:59:01.118621 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:01.125364 systemd-logind[1563]: New session 20 of user core. Mar 13 00:59:01.138690 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:59:01.249372 sshd[5730]: Connection closed by 10.0.0.1 port 53862 Mar 13 00:59:01.250163 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:01.255919 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:53862.service: Deactivated successfully. Mar 13 00:59:01.258783 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:59:01.260973 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:59:01.263008 systemd-logind[1563]: Removed session 20. Mar 13 00:59:06.264322 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:53874.service - OpenSSH per-connection server daemon (10.0.0.1:53874). Mar 13 00:59:06.344093 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 53874 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:59:06.346177 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:06.352953 systemd-logind[1563]: New session 21 of user core. Mar 13 00:59:06.362722 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 13 00:59:06.531945 sshd[5822]: Connection closed by 10.0.0.1 port 53874 Mar 13 00:59:06.532720 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:06.537044 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:53874.service: Deactivated successfully. Mar 13 00:59:06.539849 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:59:06.542160 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. 
Mar 13 00:59:06.544922 systemd-logind[1563]: Removed session 21. Mar 13 00:59:11.545287 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:45034.service - OpenSSH per-connection server daemon (10.0.0.1:45034). Mar 13 00:59:11.617741 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:59:11.619639 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:11.625645 systemd-logind[1563]: New session 22 of user core. Mar 13 00:59:11.636733 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:59:11.740388 sshd[5844]: Connection closed by 10.0.0.1 port 45034 Mar 13 00:59:11.740863 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:11.746046 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:45034.service: Deactivated successfully. Mar 13 00:59:11.748689 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:59:11.749918 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. Mar 13 00:59:11.751843 systemd-logind[1563]: Removed session 22. Mar 13 00:59:16.755038 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:45038.service - OpenSSH per-connection server daemon (10.0.0.1:45038). Mar 13 00:59:16.819357 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 45038 ssh2: RSA SHA256:Tj3wjrSJxcezcEKNOhNYW6ODk8vmuVpOeVbl+By0hNg Mar 13 00:59:16.821030 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:16.826862 systemd-logind[1563]: New session 23 of user core. Mar 13 00:59:16.836704 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 13 00:59:16.933400 sshd[5860]: Connection closed by 10.0.0.1 port 45038 Mar 13 00:59:16.933883 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:16.938720 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:45038.service: Deactivated successfully. Mar 13 00:59:16.941040 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:59:16.942536 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. Mar 13 00:59:16.944139 systemd-logind[1563]: Removed session 23.