Mar 3 13:58:59.775806 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 10:59:45 -00 2026
Mar 3 13:58:59.777652 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:58:59.777666 kernel: BIOS-provided physical RAM map:
Mar 3 13:58:59.777733 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:58:59.777739 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 3 13:58:59.777745 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 3 13:58:59.777752 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 3 13:58:59.777758 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 3 13:58:59.777825 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 3 13:58:59.777832 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 3 13:58:59.777838 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 3 13:58:59.777845 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 3 13:58:59.777855 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 3 13:58:59.777861 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 3 13:58:59.777868 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 3 13:58:59.777875 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 3 13:58:59.778396 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 3 13:58:59.779050 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 3 13:58:59.779059 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 3 13:58:59.779065 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 3 13:58:59.779071 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 3 13:58:59.779078 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 3 13:58:59.779084 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 3 13:58:59.779090 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 3 13:58:59.779097 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 3 13:58:59.779103 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 3 13:58:59.779110 kernel: NX (Execute Disable) protection: active
Mar 3 13:58:59.779116 kernel: APIC: Static calls initialized
Mar 3 13:58:59.779126 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 3 13:58:59.779133 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 3 13:58:59.779140 kernel: extended physical RAM map:
Mar 3 13:58:59.779146 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 3 13:58:59.779153 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 3 13:58:59.779159 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 3 13:58:59.779165 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 3 13:58:59.779172 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 3 13:58:59.779178 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 3 13:58:59.779185 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 3 13:58:59.779191 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 3 13:58:59.779750 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 3 13:58:59.785241 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 3 13:58:59.785277 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 3 13:58:59.785285 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 3 13:58:59.785292 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 3 13:58:59.785333 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 3 13:58:59.785340 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 3 13:58:59.785347 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 3 13:58:59.785354 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 3 13:58:59.785360 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 3 13:58:59.785367 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 3 13:58:59.785374 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 3 13:58:59.785381 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 3 13:58:59.785387 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 3 13:58:59.785394 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 3 13:58:59.785401 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 3 13:58:59.785410 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 3 13:58:59.785417 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 3 13:58:59.785424 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 3 13:58:59.786227 kernel: efi: EFI v2.7 by EDK II
Mar 3 13:58:59.786260 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 3 13:58:59.788265 kernel: random: crng init done
Mar 3 13:58:59.790345 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 3 13:58:59.790898 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 3 13:58:59.791008 kernel: secureboot: Secure boot disabled
Mar 3 13:58:59.791018 kernel: SMBIOS 2.8 present.
Mar 3 13:58:59.791024 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 3 13:58:59.791068 kernel: DMI: Memory slots populated: 1/1
Mar 3 13:58:59.791076 kernel: Hypervisor detected: KVM
Mar 3 13:58:59.791082 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 3 13:58:59.791089 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 3 13:58:59.791096 kernel: kvm-clock: using sched offset of 24434966967 cycles
Mar 3 13:58:59.791104 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 3 13:58:59.791111 kernel: tsc: Detected 2445.426 MHz processor
Mar 3 13:58:59.791118 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 3 13:58:59.791125 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 3 13:58:59.791132 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 3 13:58:59.791139 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 3 13:58:59.791149 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 3 13:58:59.791156 kernel: Using GB pages for direct mapping
Mar 3 13:58:59.791163 kernel: ACPI: Early table checksum verification disabled
Mar 3 13:58:59.791170 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 3 13:58:59.791177 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 3 13:58:59.791184 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:58:59.791190 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:58:59.791197 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 3 13:58:59.794047 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:58:59.799225 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:58:59.799235 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:58:59.799243 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:58:59.799249 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 3 13:58:59.799256 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 3 13:58:59.799263 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 3 13:58:59.799270 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 3 13:58:59.799277 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 3 13:58:59.799318 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 3 13:58:59.799325 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 3 13:58:59.799332 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 3 13:58:59.799339 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 3 13:58:59.799346 kernel: No NUMA configuration found
Mar 3 13:58:59.799353 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 3 13:58:59.799360 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 3 13:58:59.799367 kernel: Zone ranges:
Mar 3 13:58:59.799374 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 3 13:58:59.799384 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 3 13:58:59.799391 kernel: Normal empty
Mar 3 13:58:59.799398 kernel: Device empty
Mar 3 13:58:59.799405 kernel: Movable zone start for each node
Mar 3 13:58:59.799412 kernel: Early memory node ranges
Mar 3 13:58:59.799419 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 3 13:58:59.799585 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 3 13:58:59.799595 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 3 13:58:59.799602 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 3 13:58:59.799613 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 3 13:58:59.799620 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 3 13:58:59.799627 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 3 13:58:59.799634 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 3 13:58:59.799641 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 3 13:58:59.799697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:58:59.799714 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 3 13:58:59.799724 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 3 13:58:59.799731 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:58:59.799738 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 3 13:58:59.799746 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 3 13:58:59.799753 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 3 13:58:59.799762 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 3 13:58:59.799769 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 3 13:58:59.799777 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 3 13:58:59.799784 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 3 13:58:59.799791 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 3 13:58:59.799801 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 3 13:58:59.799808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 3 13:58:59.799815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 3 13:58:59.799822 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 3 13:58:59.799829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 3 13:58:59.799837 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 3 13:58:59.799844 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 3 13:58:59.799851 kernel: TSC deadline timer available
Mar 3 13:58:59.799858 kernel: CPU topo: Max. logical packages: 1
Mar 3 13:58:59.799868 kernel: CPU topo: Max. logical dies: 1
Mar 3 13:58:59.799875 kernel: CPU topo: Max. dies per package: 1
Mar 3 13:58:59.799882 kernel: CPU topo: Max. threads per core: 1
Mar 3 13:58:59.799889 kernel: CPU topo: Num. cores per package: 4
Mar 3 13:58:59.799896 kernel: CPU topo: Num. threads per package: 4
Mar 3 13:58:59.799903 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 3 13:58:59.799910 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 3 13:58:59.799917 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 3 13:58:59.799924 kernel: kvm-guest: setup PV sched yield
Mar 3 13:58:59.800003 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 3 13:58:59.800014 kernel: Booting paravirtualized kernel on KVM
Mar 3 13:58:59.800021 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 3 13:58:59.800029 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 3 13:58:59.800036 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 3 13:58:59.800043 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 3 13:58:59.800050 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 3 13:58:59.800057 kernel: kvm-guest: PV spinlocks enabled
Mar 3 13:58:59.800064 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 3 13:58:59.800126 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:58:59.800134 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 3 13:58:59.800141 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 13:58:59.800148 kernel: Fallback order for Node 0: 0
Mar 3 13:58:59.800156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 3 13:58:59.800163 kernel: Policy zone: DMA32
Mar 3 13:58:59.800170 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 13:58:59.800177 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 3 13:58:59.800184 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 3 13:58:59.800195 kernel: ftrace: allocated 157 pages with 5 groups
Mar 3 13:58:59.800202 kernel: Dynamic Preempt: voluntary
Mar 3 13:58:59.800209 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 13:58:59.800217 kernel: rcu: RCU event tracing is enabled.
Mar 3 13:58:59.800225 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 3 13:58:59.800232 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 13:58:59.800239 kernel: Rude variant of Tasks RCU enabled.
Mar 3 13:58:59.800246 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 13:58:59.800253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 13:58:59.800263 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 3 13:58:59.800803 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:58:59.800834 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:58:59.800842 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:58:59.800849 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 3 13:58:59.801333 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 13:58:59.801402 kernel: Console: colour dummy device 80x25
Mar 3 13:58:59.801417 kernel: printk: legacy console [ttyS0] enabled
Mar 3 13:58:59.801430 kernel: ACPI: Core revision 20240827
Mar 3 13:58:59.802042 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 3 13:58:59.802052 kernel: APIC: Switch to symmetric I/O mode setup
Mar 3 13:58:59.802059 kernel: x2apic enabled
Mar 3 13:58:59.802067 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 3 13:58:59.802074 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 3 13:58:59.802081 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 3 13:58:59.802089 kernel: kvm-guest: setup PV IPIs
Mar 3 13:58:59.802096 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 3 13:58:59.802103 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:58:59.802193 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 3 13:58:59.802200 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 3 13:58:59.802207 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 3 13:58:59.802215 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 3 13:58:59.802222 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 3 13:58:59.802229 kernel: Spectre V2 : Mitigation: Retpolines
Mar 3 13:58:59.802236 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 3 13:58:59.802243 kernel: Speculative Store Bypass: Vulnerable
Mar 3 13:58:59.802251 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 3 13:58:59.802261 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 3 13:58:59.802343 kernel: active return thunk: srso_alias_return_thunk
Mar 3 13:58:59.802357 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 3 13:58:59.802369 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 3 13:58:59.802382 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:58:59.802394 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 3 13:58:59.802401 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 3 13:58:59.802409 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 3 13:58:59.802421 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 3 13:58:59.802429 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 3 13:58:59.802436 kernel: Freeing SMP alternatives memory: 32K
Mar 3 13:58:59.802443 kernel: pid_max: default: 32768 minimum: 301
Mar 3 13:58:59.802451 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 13:58:59.802458 kernel: landlock: Up and running.
Mar 3 13:58:59.803128 kernel: SELinux: Initializing.
Mar 3 13:58:59.803152 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:58:59.803159 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:58:59.803189 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 3 13:58:59.803197 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 3 13:58:59.803204 kernel: signal: max sigframe size: 1776
Mar 3 13:58:59.803211 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 13:58:59.803219 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 13:58:59.803227 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 13:58:59.803234 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 3 13:58:59.803241 kernel: smp: Bringing up secondary CPUs ...
Mar 3 13:58:59.803248 kernel: smpboot: x86: Booting SMP configuration:
Mar 3 13:58:59.803260 kernel: .... node #0, CPUs: #1 #2 #3
Mar 3 13:58:59.803267 kernel: smp: Brought up 1 node, 4 CPUs
Mar 3 13:58:59.803274 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 3 13:58:59.803282 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Mar 3 13:58:59.803289 kernel: devtmpfs: initialized
Mar 3 13:58:59.803296 kernel: x86/mm: Memory block size: 128MB
Mar 3 13:58:59.803304 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 3 13:58:59.803311 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 3 13:58:59.803318 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 3 13:58:59.803395 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 3 13:58:59.803402 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 3 13:58:59.803409 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 3 13:58:59.803417 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 13:58:59.803424 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 3 13:58:59.803431 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 13:58:59.803438 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 13:58:59.803445 kernel: audit: initializing netlink subsys (disabled)
Mar 3 13:58:59.803453 kernel: audit: type=2000 audit(1772546322.856:1): state=initialized audit_enabled=0 res=1
Mar 3 13:58:59.803462 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 13:58:59.803588 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 3 13:58:59.803595 kernel: cpuidle: using governor menu
Mar 3 13:58:59.803602 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 13:58:59.803610 kernel: dca service started, version 1.12.1
Mar 3 13:58:59.803617 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 3 13:58:59.803624 kernel: PCI: Using configuration type 1 for base access
Mar 3 13:58:59.803631 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 3 13:58:59.803643 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 13:58:59.803651 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 13:58:59.803658 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 13:58:59.803666 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 13:58:59.803673 kernel: ACPI: Added _OSI(Module Device)
Mar 3 13:58:59.803680 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 13:58:59.803687 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 13:58:59.803695 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 3 13:58:59.803702 kernel: ACPI: Interpreter enabled
Mar 3 13:58:59.803711 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 3 13:58:59.803718 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 3 13:58:59.803726 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 3 13:58:59.803733 kernel: PCI: Using E820 reservations for host bridge windows
Mar 3 13:58:59.803740 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 3 13:58:59.803747 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 3 13:58:59.804381 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 13:58:59.804675 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 3 13:58:59.804835 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 3 13:58:59.804846 kernel: PCI host bridge to bus 0000:00
Mar 3 13:58:59.806187 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 3 13:58:59.806345 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 3 13:58:59.806649 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 3 13:58:59.806785 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 3 13:58:59.807022 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 3 13:58:59.807166 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 3 13:58:59.807296 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 3 13:58:59.807578 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 3 13:58:59.807742 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 3 13:58:59.807884 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 3 13:58:59.808113 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 3 13:58:59.808315 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 3 13:58:59.808454 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 3 13:58:59.808754 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 10742 usecs
Mar 3 13:58:59.810861 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 3 13:58:59.811147 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 3 13:58:59.811291 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 3 13:58:59.811431 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 3 13:58:59.811726 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 3 13:58:59.811870 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 3 13:58:59.812703 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 3 13:58:59.812885 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 3 13:58:59.813127 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 3 13:58:59.813272 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 3 13:58:59.813454 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 3 13:58:59.813771 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 3 13:58:59.813912 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 3 13:58:59.814151 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 3 13:58:59.814294 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 3 13:58:59.814624 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 19531 usecs
Mar 3 13:58:59.814784 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 3 13:58:59.814923 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 3 13:58:59.815160 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 3 13:58:59.815308 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 3 13:58:59.815446 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 3 13:58:59.815456 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 3 13:58:59.815576 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 3 13:58:59.815586 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 3 13:58:59.815593 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 3 13:58:59.815605 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 3 13:58:59.815613 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 3 13:58:59.815620 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 3 13:58:59.816225 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 3 13:58:59.816234 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 3 13:58:59.816242 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 3 13:58:59.816249 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 3 13:58:59.816256 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 3 13:58:59.816263 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 3 13:58:59.816290 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 3 13:58:59.816297 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 3 13:58:59.816305 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 3 13:58:59.816312 kernel: iommu: Default domain type: Translated
Mar 3 13:58:59.816319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 3 13:58:59.816326 kernel: efivars: Registered efivars operations
Mar 3 13:58:59.816333 kernel: PCI: Using ACPI for IRQ routing
Mar 3 13:58:59.816341 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 3 13:58:59.816348 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 3 13:58:59.816358 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 3 13:58:59.816365 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 3 13:58:59.816371 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 3 13:58:59.816378 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 3 13:58:59.816385 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 3 13:58:59.816393 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 3 13:58:59.816400 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 3 13:58:59.816764 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 3 13:58:59.817017 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 3 13:58:59.817171 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 3 13:58:59.817181 kernel: vgaarb: loaded
Mar 3 13:58:59.817189 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 3 13:58:59.817196 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 3 13:58:59.817203 kernel: clocksource: Switched to clocksource kvm-clock
Mar 3 13:58:59.817211 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 13:58:59.817218 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 13:58:59.817226 kernel: pnp: PnP ACPI init
Mar 3 13:58:59.817413 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 3 13:58:59.817458 kernel: pnp: PnP ACPI: found 6 devices
Mar 3 13:58:59.817610 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 3 13:58:59.817619 kernel: NET: Registered PF_INET protocol family
Mar 3 13:58:59.817627 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 3 13:58:59.817634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 3 13:58:59.817661 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 13:58:59.817671 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 13:58:59.817681 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 3 13:58:59.817688 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 3 13:58:59.817696 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:58:59.817703 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:58:59.817711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 13:58:59.817718 kernel: NET: Registered PF_XDP protocol family
Mar 3 13:58:59.818388 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 3 13:58:59.818837 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 3 13:58:59.819669 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 3 13:58:59.819893 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 3 13:58:59.820118 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 3 13:58:59.820253 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 3 13:58:59.820383 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 3 13:58:59.820635 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 3 13:58:59.820648 kernel: PCI: CLS 0 bytes, default 64
Mar 3 13:58:59.820656 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:58:59.820664 kernel: Initialise system trusted keyrings
Mar 3 13:58:59.820681 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 3 13:58:59.820689 kernel: Key type asymmetric registered
Mar 3 13:58:59.820696 kernel: Asymmetric key parser 'x509' registered
Mar 3 13:58:59.820703 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 3 13:58:59.820711 kernel: io scheduler mq-deadline registered
Mar 3 13:58:59.820718 kernel: io scheduler kyber registered
Mar 3 13:58:59.820725 kernel: io scheduler bfq registered
Mar 3 13:58:59.820733 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 3 13:58:59.820741 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 3 13:58:59.820752 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 3 13:58:59.820759 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 3 13:58:59.820766 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 13:58:59.820774 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 3 13:58:59.820781 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 3 13:58:59.820789 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 3 13:58:59.820799 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 3 13:58:59.821090 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 3 13:58:59.821108 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 3 13:58:59.821247 kernel: rtc_cmos 00:04: registered as rtc0
Mar 3 13:58:59.821382 kernel: rtc_cmos 00:04: setting system clock to 2026-03-03T13:58:57 UTC (1772546337)
Mar 3 13:58:59.821635 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 3 13:58:59.821647 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 3 13:58:59.821660 kernel: efifb: probing for efifb
Mar 3 13:58:59.821668 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 3 13:58:59.821676 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 3 13:58:59.821683 kernel: efifb: scrolling: redraw
Mar 3 13:58:59.821691 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 3 13:58:59.821698 kernel: Console: switching to colour frame buffer device 160x50
Mar 3 13:58:59.821706 kernel: fb0: EFI VGA frame buffer device
Mar 3 13:58:59.821714 kernel: pstore: Using crash dump compression: deflate
Mar 3 13:58:59.821721 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 3 13:58:59.821729 kernel: NET: Registered PF_INET6 protocol family
Mar 3 13:58:59.821739 kernel: Segment Routing with IPv6
Mar 3 13:58:59.821746 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 13:58:59.821754 kernel: NET: Registered PF_PACKET protocol family
Mar 3 13:58:59.821764 kernel: Key type dns_resolver registered
Mar 3 13:58:59.821772 kernel: IPI shorthand broadcast: enabled
Mar 3 13:58:59.821780 kernel: sched_clock: Marking stable (14244060588, 2078918305)->(17143413595, -820434702)
Mar 3 13:58:59.821787 kernel: registered taskstats version 1
Mar 3 13:58:59.821795 kernel: Loading compiled-in X.509 certificates
Mar 3 13:58:59.821803 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: bf135b2a3d3664cc6742f4e1848867384c1e52f1'
Mar 3 13:58:59.821813 kernel: Demotion targets for Node 0: null
Mar 3 13:58:59.821823 kernel: Key type .fscrypt registered
Mar 3 13:58:59.821830 kernel: Key type fscrypt-provisioning registered Mar 3 13:58:59.821838 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 3 13:58:59.821846 kernel: ima: Allocated hash algorithm: sha1 Mar 3 13:58:59.821853 kernel: ima: No architecture policies found Mar 3 13:58:59.821861 kernel: clk: Disabling unused clocks Mar 3 13:58:59.821868 kernel: Warning: unable to open an initial console. Mar 3 13:58:59.821879 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 3 13:58:59.821887 kernel: Write protecting the kernel read-only data: 40960k Mar 3 13:58:59.821895 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 3 13:58:59.821902 kernel: Run /init as init process Mar 3 13:58:59.821909 kernel: with arguments: Mar 3 13:58:59.821917 kernel: /init Mar 3 13:58:59.821925 kernel: with environment: Mar 3 13:58:59.822010 kernel: HOME=/ Mar 3 13:58:59.822017 kernel: TERM=linux Mar 3 13:58:59.822078 systemd[1]: Successfully made /usr/ read-only. Mar 3 13:58:59.822094 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 3 13:58:59.822103 systemd[1]: Detected virtualization kvm. Mar 3 13:58:59.822111 systemd[1]: Detected architecture x86-64. Mar 3 13:58:59.822118 systemd[1]: Running in initrd. Mar 3 13:58:59.822126 systemd[1]: No hostname configured, using default hostname. Mar 3 13:58:59.822134 systemd[1]: Hostname set to . Mar 3 13:58:59.822144 systemd[1]: Initializing machine ID from VM UUID. Mar 3 13:58:59.822152 kernel: hrtimer: interrupt took 5424535 ns Mar 3 13:58:59.822160 systemd[1]: Queued start job for default target initrd.target. 
Mar 3 13:58:59.822168 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 3 13:58:59.822176 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 3 13:58:59.822185 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 3 13:58:59.822193 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 3 13:58:59.822201 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 3 13:58:59.822212 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 3 13:58:59.822221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 3 13:58:59.822228 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 3 13:58:59.822236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 3 13:58:59.822244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 3 13:58:59.822252 systemd[1]: Reached target paths.target - Path Units. Mar 3 13:58:59.822260 systemd[1]: Reached target slices.target - Slice Units. Mar 3 13:58:59.822268 systemd[1]: Reached target swap.target - Swaps. Mar 3 13:58:59.822278 systemd[1]: Reached target timers.target - Timer Units. Mar 3 13:58:59.822286 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 3 13:58:59.822294 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 3 13:58:59.822301 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 3 13:58:59.822309 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Mar 3 13:58:59.822317 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 3 13:58:59.822325 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 3 13:58:59.822332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 3 13:58:59.822343 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 13:58:59.822350 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 3 13:58:59.822359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 3 13:58:59.822366 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 3 13:58:59.822375 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 3 13:58:59.822382 systemd[1]: Starting systemd-fsck-usr.service... Mar 3 13:58:59.822390 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 3 13:58:59.822398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 3 13:58:59.822406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:58:59.822417 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 3 13:58:59.822425 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 13:58:59.822433 systemd[1]: Finished systemd-fsck-usr.service. Mar 3 13:58:59.822441 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 3 13:58:59.822699 systemd-journald[204]: Collecting audit messages is disabled. Mar 3 13:58:59.822722 systemd-journald[204]: Journal started Mar 3 13:58:59.822794 systemd-journald[204]: Runtime Journal (/run/log/journal/43b2a80c5e7b47d5bcb80db3b0feac55) is 6M, max 48.1M, 42.1M free. 
Mar 3 13:58:59.831687 systemd[1]: Started systemd-journald.service - Journal Service. Mar 3 13:58:59.748082 systemd-modules-load[205]: Inserted module 'overlay' Mar 3 13:58:59.848128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 13:59:00.386127 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1010082268 wd_nsec: 1010081808 Mar 3 13:59:00.424386 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 3 13:59:00.449876 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 3 13:59:00.449910 kernel: Bridge firewalling registered Mar 3 13:59:00.427807 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 3 13:59:00.450914 systemd-modules-load[205]: Inserted module 'br_netfilter' Mar 3 13:59:00.485784 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 3 13:59:00.487059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:59:00.516362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:59:00.556164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 3 13:59:00.576287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:59:00.583779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 13:59:00.637190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 3 13:59:00.641799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:59:00.666864 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 3 13:59:00.667667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 3 13:59:00.708834 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 3 13:59:00.785097 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c Mar 3 13:59:00.811610 systemd-resolved[244]: Positive Trust Anchors: Mar 3 13:59:00.812025 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:59:00.812063 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:59:00.823921 systemd-resolved[244]: Defaulting to hostname 'linux'. Mar 3 13:59:00.837114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:59:00.854275 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:59:01.105115 kernel: SCSI subsystem initialized Mar 3 13:59:01.121735 kernel: Loading iSCSI transport class v2.0-870. 
Mar 3 13:59:01.144792 kernel: iscsi: registered transport (tcp) Mar 3 13:59:01.197723 kernel: iscsi: registered transport (qla4xxx) Mar 3 13:59:01.198166 kernel: QLogic iSCSI HBA Driver Mar 3 13:59:01.282652 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 3 13:59:01.362757 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:59:01.366448 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 3 13:59:02.029390 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 3 13:59:02.041923 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 3 13:59:02.190200 kernel: raid6: avx2x4 gen() 27938 MB/s Mar 3 13:59:02.211219 kernel: raid6: avx2x2 gen() 18152 MB/s Mar 3 13:59:02.236036 kernel: raid6: avx2x1 gen() 15802 MB/s Mar 3 13:59:02.236351 kernel: raid6: using algorithm avx2x4 gen() 27938 MB/s Mar 3 13:59:02.261420 kernel: raid6: .... xor() 3728 MB/s, rmw enabled Mar 3 13:59:02.261722 kernel: raid6: using avx2x2 recovery algorithm Mar 3 13:59:02.309813 kernel: xor: automatically using best checksumming function avx Mar 3 13:59:02.667243 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 3 13:59:02.707760 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 3 13:59:02.713607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:59:02.795148 systemd-udevd[455]: Using default interface naming scheme 'v255'. Mar 3 13:59:02.814154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:59:02.816079 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 3 13:59:02.935322 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Mar 3 13:59:03.084244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 3 13:59:03.095277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 3 13:59:03.287646 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 3 13:59:03.315150 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 3 13:59:03.450932 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 3 13:59:03.460816 kernel: cryptd: max_cpu_qlen set to 1000 Mar 3 13:59:03.483670 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 3 13:59:03.537921 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 3 13:59:03.538100 kernel: GPT:9289727 != 19775487 Mar 3 13:59:03.538115 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 3 13:59:03.550782 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 3 13:59:03.550836 kernel: GPT:9289727 != 19775487 Mar 3 13:59:03.557908 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 3 13:59:03.561750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:59:03.562371 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:59:03.563204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:59:03.602274 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:59:03.613787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:59:03.620718 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 3 13:59:03.665441 kernel: AES CTR mode by8 optimization enabled Mar 3 13:59:03.665616 kernel: libata version 3.00 loaded. 
Mar 3 13:59:03.712648 kernel: ahci 0000:00:1f.2: version 3.0 Mar 3 13:59:03.718818 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 3 13:59:03.740330 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 3 13:59:03.740670 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 3 13:59:03.740863 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 3 13:59:03.748365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:59:03.771083 kernel: scsi host0: ahci Mar 3 13:59:03.783936 kernel: scsi host1: ahci Mar 3 13:59:03.831770 kernel: scsi host2: ahci Mar 3 13:59:03.832330 kernel: scsi host3: ahci Mar 3 13:59:03.832906 kernel: scsi host4: ahci Mar 3 13:59:03.833171 kernel: scsi host5: ahci Mar 3 13:59:03.833340 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 3 13:59:03.833351 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 3 13:59:03.833362 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 3 13:59:03.833411 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 3 13:59:03.784067 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 3 13:59:03.865359 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Mar 3 13:59:03.865385 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 3 13:59:03.865692 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 3 13:59:03.921389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 3 13:59:03.942084 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 3 13:59:03.952697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 3 13:59:03.994245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 3 13:59:04.089179 disk-uuid[620]: Primary Header is updated. Mar 3 13:59:04.089179 disk-uuid[620]: Secondary Entries is updated. Mar 3 13:59:04.089179 disk-uuid[620]: Secondary Header is updated. Mar 3 13:59:04.140764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:59:04.158727 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:59:04.182744 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 3 13:59:04.209700 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 3 13:59:04.216636 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 3 13:59:04.224816 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 3 13:59:04.234073 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 3 13:59:04.254306 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 3 13:59:04.254350 kernel: ata3.00: LPM support broken, forcing max_power Mar 3 13:59:04.254363 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 3 13:59:04.254380 kernel: ata3.00: applying bridge limits Mar 3 13:59:04.267687 kernel: ata3.00: LPM support broken, forcing max_power Mar 3 13:59:04.267728 kernel: ata3.00: configured for UDMA/100 Mar 3 13:59:04.280618 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 3 13:59:04.368148 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 3 13:59:04.368631 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 3 13:59:04.397677 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 3 13:59:05.008851 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 3 13:59:05.030121 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 3 13:59:05.053431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 3 13:59:05.070154 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 3 13:59:05.103945 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 3 13:59:05.163677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:59:05.168417 disk-uuid[621]: The operation has completed successfully. Mar 3 13:59:05.171745 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 3 13:59:05.263060 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 3 13:59:05.263302 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 3 13:59:05.326052 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 3 13:59:05.379925 sh[650]: Success Mar 3 13:59:05.480262 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 3 13:59:05.480720 kernel: device-mapper: uevent: version 1.0.3 Mar 3 13:59:05.500171 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 3 13:59:05.551761 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 3 13:59:05.692209 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 3 13:59:05.698761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 3 13:59:05.746764 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 3 13:59:05.787592 kernel: BTRFS: device fsid f550cb98-648e-4600-9237-4b15eb09827b devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (662) Mar 3 13:59:05.787630 kernel: BTRFS info (device dm-0): first mount of filesystem f550cb98-648e-4600-9237-4b15eb09827b Mar 3 13:59:05.787647 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:59:05.821330 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 3 13:59:05.821416 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 3 13:59:05.824705 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 3 13:59:05.825627 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 3 13:59:05.837624 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 3 13:59:05.839083 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 3 13:59:05.892365 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 3 13:59:05.981904 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (695) Mar 3 13:59:05.993639 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:59:05.993685 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:59:06.042331 kernel: BTRFS info (device vda6): turning on async discard Mar 3 13:59:06.042439 kernel: BTRFS info (device vda6): enabling free space tree Mar 3 13:59:06.070087 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:59:06.080386 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 3 13:59:06.104947 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 3 13:59:06.652314 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 3 13:59:06.688461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:59:07.086406 systemd-networkd[832]: lo: Link UP Mar 3 13:59:07.086607 systemd-networkd[832]: lo: Gained carrier Mar 3 13:59:07.124124 systemd-networkd[832]: Enumeration completed Mar 3 13:59:07.126871 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:59:07.129417 systemd-networkd[832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:59:07.129424 systemd-networkd[832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 3 13:59:07.142457 systemd[1]: Reached target network.target - Network. Mar 3 13:59:07.145729 systemd-networkd[832]: eth0: Link UP Mar 3 13:59:07.146624 systemd-networkd[832]: eth0: Gained carrier Mar 3 13:59:07.146641 systemd-networkd[832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:59:07.713917 systemd-networkd[832]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 3 13:59:08.206817 systemd-resolved[244]: Detected conflict on linux IN A 10.0.0.115 Mar 3 13:59:08.206981 systemd-resolved[244]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. 
Mar 3 13:59:08.283668 ignition[749]: Ignition 2.22.0 Mar 3 13:59:08.283761 ignition[749]: Stage: fetch-offline Mar 3 13:59:08.283969 ignition[749]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:59:08.284071 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:59:08.284788 ignition[749]: parsed url from cmdline: "" Mar 3 13:59:08.284796 ignition[749]: no config URL provided Mar 3 13:59:08.284891 ignition[749]: reading system config file "/usr/lib/ignition/user.ign" Mar 3 13:59:08.284913 ignition[749]: no config at "/usr/lib/ignition/user.ign" Mar 3 13:59:08.285361 ignition[749]: op(1): [started] loading QEMU firmware config module Mar 3 13:59:08.285372 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 3 13:59:08.330677 ignition[749]: op(1): [finished] loading QEMU firmware config module Mar 3 13:59:08.389310 systemd-networkd[832]: eth0: Gained IPv6LL Mar 3 13:59:09.389128 ignition[749]: parsing config with SHA512: ca879494ca710f2236da7a59870d85cd0fae117da86c4f2de210927456d2f434acd0f236abbc734b904db9505f6d5e7861c8b8143773026c6d8d01231e155e12 Mar 3 13:59:09.535122 unknown[749]: fetched base config from "system" Mar 3 13:59:09.535217 unknown[749]: fetched user config from "qemu" Mar 3 13:59:09.555916 ignition[749]: fetch-offline: fetch-offline passed Mar 3 13:59:09.556206 ignition[749]: Ignition finished successfully Mar 3 13:59:09.560750 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 3 13:59:09.577785 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 3 13:59:09.584390 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 3 13:59:09.818806 ignition[845]: Ignition 2.22.0 Mar 3 13:59:09.818884 ignition[845]: Stage: kargs Mar 3 13:59:09.819752 ignition[845]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:59:09.819775 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:59:09.827825 ignition[845]: kargs: kargs passed Mar 3 13:59:09.827891 ignition[845]: Ignition finished successfully Mar 3 13:59:09.865272 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 3 13:59:09.885209 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 3 13:59:10.001848 ignition[853]: Ignition 2.22.0 Mar 3 13:59:10.001941 ignition[853]: Stage: disks Mar 3 13:59:10.006983 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 3 13:59:10.002213 ignition[853]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:59:10.016898 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 3 13:59:10.002229 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:59:10.029841 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 3 13:59:10.003412 ignition[853]: disks: disks passed Mar 3 13:59:10.040148 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 3 13:59:10.003638 ignition[853]: Ignition finished successfully Mar 3 13:59:10.048835 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:59:10.058084 systemd[1]: Reached target basic.target - Basic System. Mar 3 13:59:10.068201 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 3 13:59:10.149933 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 3 13:59:10.159681 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 3 13:59:10.191194 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 3 13:59:10.574627 kernel: EXT4-fs (vda9): mounted filesystem f0c751de-febc-4e57-b330-c926d38ed5ec r/w with ordered data mode. Quota mode: none. Mar 3 13:59:10.576639 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 3 13:59:10.577831 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 3 13:59:10.607875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 3 13:59:10.618714 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 3 13:59:10.634452 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 3 13:59:10.634740 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 3 13:59:10.634774 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 3 13:59:10.736148 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872) Mar 3 13:59:10.736186 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:59:10.736203 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:59:10.723921 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 3 13:59:10.755383 kernel: BTRFS info (device vda6): turning on async discard Mar 3 13:59:10.755403 kernel: BTRFS info (device vda6): enabling free space tree Mar 3 13:59:10.752145 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 3 13:59:10.776708 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 3 13:59:10.914254 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Mar 3 13:59:10.936235 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory Mar 3 13:59:10.956884 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory Mar 3 13:59:10.975935 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory Mar 3 13:59:11.269452 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 3 13:59:11.271963 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 3 13:59:11.306112 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 3 13:59:11.325439 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 3 13:59:11.343767 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:59:11.398613 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 3 13:59:11.427285 ignition[984]: INFO : Ignition 2.22.0 Mar 3 13:59:11.427285 ignition[984]: INFO : Stage: mount Mar 3 13:59:11.439116 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:59:11.439116 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:59:11.439116 ignition[984]: INFO : mount: mount passed Mar 3 13:59:11.439116 ignition[984]: INFO : Ignition finished successfully Mar 3 13:59:11.441388 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 3 13:59:11.447749 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 3 13:59:11.579277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 3 13:59:11.618653 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998)
Mar 3 13:59:11.633225 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:59:11.633271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:59:11.659917 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:59:11.659950 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:59:11.662894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 13:59:11.756110 ignition[1015]: INFO : Ignition 2.22.0
Mar 3 13:59:11.756110 ignition[1015]: INFO : Stage: files
Mar 3 13:59:11.766220 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:59:11.766220 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:59:11.766220 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Mar 3 13:59:11.766220 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 3 13:59:11.766220 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 3 13:59:11.814353 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 3 13:59:11.824806 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 3 13:59:11.835690 unknown[1015]: wrote ssh authorized keys file for user: core
Mar 3 13:59:11.843629 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 3 13:59:11.852922 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:59:11.852922 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 3 13:59:11.947674 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 3 13:59:12.098128 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:59:12.098128 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:59:12.130195 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 3 13:59:12.433108 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 3 13:59:12.914829 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 3 13:59:12.914829 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 3 13:59:12.944176 ignition[1015]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:59:13.051449 ignition[1015]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:59:13.066673 ignition[1015]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:59:13.066673 ignition[1015]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:59:13.066673 ignition[1015]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 13:59:13.066673 ignition[1015]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 13:59:13.066673 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:59:13.066673 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:59:13.066673 ignition[1015]: INFO : files: files passed
Mar 3 13:59:13.066673 ignition[1015]: INFO : Ignition finished successfully
Mar 3 13:59:13.133174 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 13:59:13.169381 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 13:59:13.187903 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 13:59:13.234416 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 13:59:13.234818 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 13:59:13.267975 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 3 13:59:13.287214 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:59:13.287214 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:59:13.321647 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:59:13.340183 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:59:13.353843 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 13:59:13.390238 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 13:59:13.533864 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 13:59:13.534231 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 13:59:13.552649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 13:59:13.571364 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 13:59:13.580811 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 13:59:13.582891 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 13:59:13.668959 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:59:13.671898 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 13:59:13.734182 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:59:13.745223 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:59:13.756375 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 13:59:13.792934 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 13:59:13.793375 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:59:13.812781 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 13:59:13.823828 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 13:59:13.839026 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 13:59:13.871773 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:59:13.883194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 13:59:13.903861 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:59:13.916012 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 13:59:13.960014 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:59:13.973338 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 13:59:13.996652 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 13:59:14.009451 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 13:59:14.023761 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 13:59:14.023932 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:59:14.048997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:59:14.063313 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:59:14.077870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 13:59:14.100704 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:59:14.101170 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 13:59:14.101397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:59:14.136385 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 13:59:14.136979 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:59:14.149807 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 13:59:14.164235 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 13:59:14.179310 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:59:14.203346 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 13:59:14.227911 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 13:59:14.235789 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 13:59:14.235892 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:59:14.243299 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 13:59:14.243423 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:59:14.258252 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 13:59:14.258427 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:59:14.274454 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 13:59:14.274807 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 13:59:14.309414 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 13:59:14.316869 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 13:59:14.317211 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:59:14.398811 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 13:59:14.399216 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 13:59:14.399802 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:59:14.439284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 13:59:14.439807 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:59:14.469452 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 13:59:14.483328 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 13:59:14.483740 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 13:59:14.514235 ignition[1071]: INFO : Ignition 2.22.0
Mar 3 13:59:14.514235 ignition[1071]: INFO : Stage: umount
Mar 3 13:59:14.514235 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:59:14.514235 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:59:14.514235 ignition[1071]: INFO : umount: umount passed
Mar 3 13:59:14.514235 ignition[1071]: INFO : Ignition finished successfully
Mar 3 13:59:14.504889 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 13:59:14.505190 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 13:59:14.524427 systemd[1]: Stopped target network.target - Network.
Mar 3 13:59:14.541371 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 13:59:14.541684 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 13:59:14.557023 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 13:59:14.557221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 13:59:14.557459 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 13:59:14.557717 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 13:59:14.574727 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 13:59:14.574808 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 13:59:14.602864 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 13:59:14.610162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 13:59:14.626843 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 13:59:14.626965 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 13:59:14.643148 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 13:59:14.643284 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 13:59:14.788252 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 13:59:14.788920 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 13:59:14.817352 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 13:59:14.817850 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 13:59:14.818164 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 13:59:14.854980 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 13:59:14.857275 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 13:59:14.883727 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 13:59:14.883885 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:59:14.903939 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 13:59:14.918820 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 13:59:14.918928 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:59:14.929935 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 13:59:14.930019 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:59:14.955253 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 13:59:14.955353 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:59:14.976009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 13:59:14.976229 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:59:15.023993 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:59:15.044314 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 13:59:15.044427 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:59:15.089916 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 13:59:15.090631 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:59:15.114226 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 13:59:15.114349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:59:15.126631 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 13:59:15.126724 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:59:15.147739 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 13:59:15.147868 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:59:15.187776 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 13:59:15.187878 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:59:15.216336 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 13:59:15.216439 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:59:15.257248 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 13:59:15.267207 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 13:59:15.267297 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:59:15.325841 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 13:59:15.326025 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:59:15.350401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:59:15.350761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:59:15.398705 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 3 13:59:15.398917 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 3 13:59:15.398982 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:59:15.399835 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 13:59:15.400156 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 13:59:15.414180 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 13:59:15.414410 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 13:59:15.490920 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 13:59:15.504247 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 13:59:15.561789 systemd[1]: Switching root.
Mar 3 13:59:15.625163 systemd-journald[204]: Journal stopped
Mar 3 13:59:18.789013 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Mar 3 13:59:18.789205 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 13:59:18.789226 kernel: SELinux: policy capability open_perms=1
Mar 3 13:59:18.789241 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 13:59:18.789255 kernel: SELinux: policy capability always_check_network=0
Mar 3 13:59:18.789269 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 13:59:18.789296 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 13:59:18.789314 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 13:59:18.789334 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 13:59:18.789348 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 13:59:18.789363 kernel: audit: type=1403 audit(1772546356.009:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 13:59:18.789383 systemd[1]: Successfully loaded SELinux policy in 162.329ms.
Mar 3 13:59:18.789415 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.273ms.
Mar 3 13:59:18.789433 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:59:18.789448 systemd[1]: Detected virtualization kvm.
Mar 3 13:59:18.789635 systemd[1]: Detected architecture x86-64.
Mar 3 13:59:18.789660 systemd[1]: Detected first boot.
Mar 3 13:59:18.789690 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:59:18.789706 zram_generator::config[1117]: No configuration found.
Mar 3 13:59:18.789722 kernel: Guest personality initialized and is inactive
Mar 3 13:59:18.789737 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 3 13:59:18.789754 kernel: Initialized host personality
Mar 3 13:59:18.789771 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 13:59:18.789786 systemd[1]: Populated /etc with preset unit settings.
Mar 3 13:59:18.789803 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 13:59:18.789823 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 13:59:18.789839 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 13:59:18.789858 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:59:18.789876 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 13:59:18.789901 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 13:59:18.789917 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 13:59:18.789934 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 13:59:18.789952 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 13:59:18.789968 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 13:59:18.789988 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 13:59:18.790004 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 13:59:18.790020 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:59:18.790039 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:59:18.790055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 13:59:18.790173 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 13:59:18.790193 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 13:59:18.790214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:59:18.790232 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 13:59:18.790248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:59:18.790264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:59:18.790280 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 13:59:18.790295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 13:59:18.790310 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:59:18.790329 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 13:59:18.790347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:59:18.790367 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:59:18.790382 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:59:18.790397 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:59:18.790417 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 13:59:18.790433 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 13:59:18.790448 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 13:59:18.790639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:59:18.790672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:59:18.790689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:59:18.790704 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 13:59:18.790724 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 13:59:18.790739 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 13:59:18.790758 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 13:59:18.790776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:59:18.790791 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 13:59:18.790807 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 13:59:18.790822 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 13:59:18.790838 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 13:59:18.790862 systemd[1]: Reached target machines.target - Containers.
Mar 3 13:59:18.790878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 13:59:18.790895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:59:18.790911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:59:18.790927 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 13:59:18.790946 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:59:18.790961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:59:18.790977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:59:18.790992 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 13:59:18.791011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:59:18.791031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 13:59:18.791047 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 13:59:18.791062 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 13:59:18.791184 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 13:59:18.791201 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 13:59:18.791219 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:59:18.791237 kernel: ACPI: bus type drm_connector registered
Mar 3 13:59:18.791258 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:59:18.791273 kernel: loop: module loaded
Mar 3 13:59:18.791287 kernel: fuse: init (API version 7.41)
Mar 3 13:59:18.791302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:59:18.791322 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:59:18.791378 systemd-journald[1202]: Collecting audit messages is disabled.
Mar 3 13:59:18.791411 systemd-journald[1202]: Journal started
Mar 3 13:59:18.791700 systemd-journald[1202]: Runtime Journal (/run/log/journal/43b2a80c5e7b47d5bcb80db3b0feac55) is 6M, max 48.1M, 42.1M free.
Mar 3 13:59:17.381213 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 13:59:17.412029 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 3 13:59:17.414041 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 13:59:17.414925 systemd[1]: systemd-journald.service: Consumed 2.731s CPU time.
Mar 3 13:59:18.824996 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 13:59:18.844831 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 13:59:18.879788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:59:18.893813 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 13:59:18.917835 systemd[1]: Stopped verity-setup.service.
Mar 3 13:59:18.917935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:59:18.959165 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:59:18.960647 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 13:59:18.972750 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 13:59:18.985674 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 13:59:18.997924 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 13:59:19.010689 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 13:59:19.023170 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 13:59:19.036324 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 13:59:19.050215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:59:19.064356 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 13:59:19.065158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 13:59:19.078212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:59:19.079310 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:59:19.092775 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:59:19.093982 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:59:19.108015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:59:19.109164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:59:19.123283 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 13:59:19.123999 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 13:59:19.136405 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:59:19.137955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:59:19.150823 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:59:19.164156 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:59:19.178461 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 13:59:19.193274 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 13:59:19.207882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:59:19.243828 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 13:59:19.257925 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 13:59:19.284005 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 13:59:19.296980 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 13:59:19.297222 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 13:59:19.310808 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 13:59:19.332439 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 13:59:19.344425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:59:19.348012 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 13:59:19.363790 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 13:59:19.377385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:59:19.385245 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 13:59:19.401735 systemd-journald[1202]: Time spent on flushing to /var/log/journal/43b2a80c5e7b47d5bcb80db3b0feac55 is 1.581358s for 1063 entries.
Mar 3 13:59:19.401735 systemd-journald[1202]: System Journal (/var/log/journal/43b2a80c5e7b47d5bcb80db3b0feac55) is 8M, max 195.6M, 187.6M free.
Mar 3 13:59:21.037729 systemd-journald[1202]: Received client request to flush runtime journal.
Mar 3 13:59:19.399738 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:59:19.417435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:59:20.967665 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 13:59:21.056688 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 13:59:21.076872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 13:59:21.111645 kernel: loop0: detected capacity change from 0 to 110984
Mar 3 13:59:21.980757 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 13:59:22.003450 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 13:59:22.033007 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 13:59:22.099784 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 13:59:22.123390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:59:22.146788 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 13:59:22.166670 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 13:59:22.242727 kernel: loop1: detected capacity change from 0 to 128560
Mar 3 13:59:22.476376 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 13:59:22.501836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 13:59:22.543948 kernel: loop2: detected capacity change from 0 to 217752
Mar 3 13:59:22.563674 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 13:59:22.570995 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 13:59:22.636347 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Mar 3 13:59:22.636374 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Mar 3 13:59:22.656322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:59:22.693775 kernel: loop3: detected capacity change from 0 to 110984
Mar 3 13:59:22.984829 kernel: loop4: detected capacity change from 0 to 128560
Mar 3 13:59:23.321681 kernel: loop5: detected capacity change from 0 to 217752
Mar 3 13:59:23.382305 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 3 13:59:23.386796 (sd-merge)[1261]: Merged extensions into '/usr'.
Mar 3 13:59:23.413426 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 13:59:23.419272 systemd[1]: Reloading...
Mar 3 13:59:25.238699 zram_generator::config[1286]: No configuration found.
Mar 3 13:59:27.184956 systemd[1]: Reloading finished in 3764 ms.
Mar 3 13:59:27.248667 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 3 13:59:27.278736 systemd[1]: Starting ensure-sysext.service...
Mar 3 13:59:27.291917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 13:59:27.375663 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 3 13:59:27.378974 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)...
Mar 3 13:59:27.379088 systemd[1]: Reloading...
Mar 3 13:59:27.581644 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 3 13:59:27.582090 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 3 13:59:27.583808 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 3 13:59:27.584448 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 3 13:59:27.587266 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 3 13:59:27.588004 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Mar 3 13:59:27.588338 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Mar 3 13:59:27.619219 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:59:27.619319 systemd-tmpfiles[1324]: Skipping /boot
Mar 3 13:59:27.661680 zram_generator::config[1347]: No configuration found.
Mar 3 13:59:27.673702 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:59:27.673726 systemd-tmpfiles[1324]: Skipping /boot
Mar 3 13:59:28.625821 systemd[1]: Reloading finished in 1245 ms.
Mar 3 13:59:28.653892 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 3 13:59:28.668826 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 13:59:28.698001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:59:28.742323 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:59:28.756706 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 3 13:59:28.787110 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 3 13:59:28.813351 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 13:59:28.855879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:59:28.873048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 3 13:59:28.969431 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 3 13:59:28.988773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:59:28.990028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:59:29.015365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:59:29.052343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:59:29.114396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:59:29.131715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:59:29.150031 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:59:29.151011 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:59:29.160424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:59:29.161371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:59:29.185857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:59:29.191313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:59:29.224138 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:59:29.237861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:59:29.277251 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 3 13:59:29.310360 augenrules[1421]: No rules
Mar 3 13:59:29.319449 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:59:29.320314 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:59:29.330679 systemd-udevd[1401]: Using default interface naming scheme 'v255'.
Mar 3 13:59:29.332740 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 3 13:59:29.349300 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 3 13:59:29.373309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:59:29.377445 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:59:29.388725 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:59:29.406989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:59:29.424052 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:59:29.459869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:59:29.476095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:59:29.489825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:59:29.490119 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:59:29.497433 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 3 13:59:29.512453 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 3 13:59:29.513816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:59:29.521391 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 3 13:59:29.534911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:59:29.583420 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:59:29.584262 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:59:29.625981 systemd[1]: Finished ensure-sysext.service.
Mar 3 13:59:29.678961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:59:29.679420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:59:29.752392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 13:59:29.776946 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 3 13:59:29.827092 augenrules[1432]: /sbin/augenrules: No change
Mar 3 13:59:29.990045 augenrules[1487]: No rules
Mar 3 13:59:30.009705 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:59:30.010333 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:59:30.346890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:59:30.347761 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:59:30.368344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:59:30.376103 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 3 13:59:30.390294 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:59:30.393686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:59:30.428681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:59:30.762796 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 3 13:59:31.057746 kernel: mousedev: PS/2 mouse device common for all mice
Mar 3 13:59:31.135122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 3 13:59:31.157300 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 3 13:59:31.182881 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 3 13:59:31.240072 systemd-resolved[1395]: Positive Trust Anchors:
Mar 3 13:59:31.240097 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 13:59:31.240143 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 13:59:31.251272 kernel: ACPI: button: Power Button [PWRF]
Mar 3 13:59:31.253828 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 3 13:59:31.254430 systemd[1]: Reached target time-set.target - System Time Set.
Mar 3 13:59:31.283101 systemd-networkd[1477]: lo: Link UP
Mar 3 13:59:31.284686 systemd-networkd[1477]: lo: Gained carrier
Mar 3 13:59:31.286907 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 3 13:59:31.287745 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 3 13:59:31.289703 systemd-networkd[1477]: Enumeration completed
Mar 3 13:59:31.292721 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:59:31.292728 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 13:59:31.295723 systemd-networkd[1477]: eth0: Link UP
Mar 3 13:59:31.296712 systemd-networkd[1477]: eth0: Gained carrier
Mar 3 13:59:31.296738 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:59:31.310381 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 3 13:59:31.316096 systemd-resolved[1395]: Defaulting to hostname 'linux'.
Mar 3 13:59:31.326629 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 3 13:59:31.336941 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 13:59:31.353908 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 13:59:31.389431 systemd[1]: Reached target network.target - Network.
Mar 3 13:59:31.389966 systemd-networkd[1477]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 3 13:59:31.391785 systemd-timesyncd[1479]: Network configuration changed, trying to establish connection.
Mar 3 13:59:32.282042 systemd-timesyncd[1479]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 3 13:59:32.282226 systemd-timesyncd[1479]: Initial clock synchronization to Tue 2026-03-03 13:59:32.281563 UTC.
Mar 3 13:59:32.282534 systemd-resolved[1395]: Clock change detected. Flushing caches.
Mar 3 13:59:32.290525 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:59:32.306830 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 13:59:32.320409 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 3 13:59:32.335128 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 3 13:59:32.349471 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 3 13:59:32.365450 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 3 13:59:32.378190 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 3 13:59:32.394020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 3 13:59:32.408861 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 3 13:59:32.409026 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:59:32.419078 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:59:32.437205 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 3 13:59:32.456861 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 3 13:59:32.478948 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 3 13:59:32.492221 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 3 13:59:32.506000 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 3 13:59:32.533259 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 3 13:59:32.547214 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 3 13:59:32.569377 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 3 13:59:32.596875 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 3 13:59:32.613395 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 3 13:59:32.641867 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 13:59:32.655208 systemd[1]: Reached target basic.target - Basic System.
Mar 3 13:59:32.668395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:59:32.668907 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:59:32.681243 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 3 13:59:32.701438 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 3 13:59:32.719941 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 3 13:59:33.353377 systemd-networkd[1477]: eth0: Gained IPv6LL
Mar 3 13:59:33.366885 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 3 13:59:33.385997 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 3 13:59:33.411087 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 3 13:59:33.422471 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 3 13:59:33.439545 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 3 13:59:33.507075 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 3 13:59:33.524883 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 3 13:59:33.554089 jq[1538]: false
Mar 3 13:59:33.600233 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing passwd entry cache
Mar 3 13:59:33.602185 oslogin_cache_refresh[1540]: Refreshing passwd entry cache
Mar 3 13:59:33.625088 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 3 13:59:33.647502 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 3 13:59:33.671091 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting users, quitting
Mar 3 13:59:33.671091 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:59:33.671091 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing group entry cache
Mar 3 13:59:33.667048 oslogin_cache_refresh[1540]: Failure getting users, quitting
Mar 3 13:59:33.671428 extend-filesystems[1539]: Found /dev/vda6
Mar 3 13:59:33.667084 oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:59:33.688893 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting groups, quitting
Mar 3 13:59:33.688893 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:59:33.688959 extend-filesystems[1539]: Found /dev/vda9
Mar 3 13:59:33.674250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:59:33.667176 oslogin_cache_refresh[1540]: Refreshing group entry cache
Mar 3 13:59:33.709176 extend-filesystems[1539]: Checking size of /dev/vda9
Mar 3 13:59:33.683802 oslogin_cache_refresh[1540]: Failure getting groups, quitting
Mar 3 13:59:33.719265 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 3 13:59:33.683824 oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:59:33.720910 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 3 13:59:33.729945 systemd[1]: Starting update-engine.service - Update Engine...
Mar 3 13:59:33.750966 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 3 13:59:33.854520 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 13:59:33.871236 jq[1554]: true
Mar 3 13:59:33.909041 extend-filesystems[1539]: Resized partition /dev/vda9
Mar 3 13:59:33.949085 extend-filesystems[1564]: resize2fs 1.47.3 (8-Jul-2025)
Mar 3 13:59:33.916961 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 3 13:59:33.924417 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 3 13:59:33.925055 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 3 13:59:33.926055 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 3 13:59:33.926466 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 3 13:59:33.977974 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 3 13:59:34.021937 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 3 13:59:34.757726 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 3 13:59:34.841495 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 3 13:59:34.885860 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 3 13:59:35.045894 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 3 13:59:34.910941 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 13:59:35.091442 update_engine[1553]: I20260303 13:59:35.090413 1553 main.cc:92] Flatcar Update Engine starting
Mar 3 13:59:35.092781 jq[1572]: true
Mar 3 13:59:34.935535 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 3 13:59:34.956279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:59:34.977493 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 13:59:35.022048 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:59:35.022994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:59:35.055557 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:59:35.062187 systemd[1]: motdgen.service: Deactivated successfully.
Mar 3 13:59:35.062884 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 3 13:59:35.139782 tar[1566]: linux-amd64/LICENSE
Mar 3 13:59:35.139782 tar[1566]: linux-amd64/helm
Mar 3 13:59:35.141833 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 3 13:59:35.141833 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 3 13:59:35.141833 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 3 13:59:35.266007 extend-filesystems[1539]: Resized filesystem in /dev/vda9
Mar 3 13:59:35.275950 bash[1615]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 13:59:35.142196 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 3 13:59:35.195080 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 3 13:59:35.195865 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 3 13:59:35.284087 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 3 13:59:35.317377 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 3 13:59:35.324832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:59:35.356503 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 3 13:59:35.357212 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 3 13:59:35.371501 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 3 13:59:35.401451 dbus-daemon[1536]: [system] SELinux support is enabled
Mar 3 13:59:35.403411 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 3 13:59:35.445249 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 3 13:59:35.472033 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 3 13:59:35.472075 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 3 13:59:35.486114 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 3 13:59:35.486238 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 3 13:59:35.487905 update_engine[1553]: I20260303 13:59:35.487844 1553 update_check_scheduler.cc:74] Next update check in 11m56s
Mar 3 13:59:35.504404 systemd[1]: Started update-engine.service - Update Engine.
Mar 3 13:59:36.061912 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 3 13:59:36.976826 kernel: kvm_amd: TSC scaling supported
Mar 3 13:59:36.977169 kernel: kvm_amd: Nested Virtualization enabled
Mar 3 13:59:36.977206 kernel: kvm_amd: Nested Paging enabled
Mar 3 13:59:37.011048 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 3 13:59:37.011147 kernel: kvm_amd: PMU virtualization is disabled
Mar 3 13:59:37.066400 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 13:59:37.113216 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 3 13:59:37.113267 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 3 13:59:37.115422 systemd-logind[1548]: New seat seat0.
Mar 3 13:59:37.130015 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 3 13:59:37.432512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:59:37.601977 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 13:59:37.609079 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 13:59:37.657277 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 13:59:37.678044 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:37152.service - OpenSSH per-connection server daemon (10.0.0.1:37152).
Mar 3 13:59:37.820792 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 13:59:37.824919 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 13:59:37.855282 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 13:59:38.671033 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 13:59:38.697989 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 13:59:38.715049 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 13:59:38.733122 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 13:59:38.767242 kernel: EDAC MC: Ver: 3.0.0
Mar 3 13:59:38.911543 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 37152 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:38.918796 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:38.943018 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 3 13:59:38.963255 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 3 13:59:39.013986 systemd-logind[1548]: New session 1 of user core.
Mar 3 13:59:39.568123 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 3 13:59:39.602765 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 3 13:59:39.651782 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 3 13:59:39.665510 systemd-logind[1548]: New session c1 of user core.
Mar 3 13:59:41.706922 systemd[1666]: Queued start job for default target default.target.
Mar 3 13:59:41.725089 systemd[1666]: Created slice app.slice - User Application Slice.
Mar 3 13:59:41.725225 systemd[1666]: Reached target paths.target - Paths.
Mar 3 13:59:41.725298 systemd[1666]: Reached target timers.target - Timers.
Mar 3 13:59:41.733873 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 3 13:59:41.833898 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 3 13:59:41.834890 systemd[1666]: Reached target sockets.target - Sockets.
Mar 3 13:59:41.835199 systemd[1666]: Reached target basic.target - Basic System.
Mar 3 13:59:41.835493 systemd[1666]: Reached target default.target - Main User Target.
Mar 3 13:59:41.835915 systemd[1666]: Startup finished in 2.103s.
Mar 3 13:59:41.836019 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 3 13:59:41.863113 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 3 13:59:41.908448 containerd[1576]: time="2026-03-03T13:59:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 13:59:41.916278 containerd[1576]: time="2026-03-03T13:59:41.915533858Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 13:59:41.969248 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:55754.service - OpenSSH per-connection server daemon (10.0.0.1:55754).
Mar 3 13:59:42.564869 containerd[1576]: time="2026-03-03T13:59:42.563063920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="672.305µs"
Mar 3 13:59:42.564869 containerd[1576]: time="2026-03-03T13:59:42.563307144Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 13:59:42.567163 containerd[1576]: time="2026-03-03T13:59:42.566549436Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 13:59:42.567554 containerd[1576]: time="2026-03-03T13:59:42.567526900Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 13:59:42.570016 containerd[1576]: time="2026-03-03T13:59:42.569986070Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 13:59:42.570917 containerd[1576]: time="2026-03-03T13:59:42.570887983Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:59:42.573088 containerd[1576]: time="2026-03-03T13:59:42.573061421Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:59:42.573797 containerd[1576]: time="2026-03-03T13:59:42.573772778Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:59:42.575065 containerd[1576]: time="2026-03-03T13:59:42.574857763Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:59:42.577196 containerd[1576]: time="2026-03-03T13:59:42.577169919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:59:42.577476 containerd[1576]: time="2026-03-03T13:59:42.577448429Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:59:42.577998 containerd[1576]: time="2026-03-03T13:59:42.577974521Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 13:59:42.579312 containerd[1576]: time="2026-03-03T13:59:42.578285030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 13:59:42.587856 containerd[1576]: time="2026-03-03T13:59:42.587808095Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:59:42.589275 containerd[1576]: time="2026-03-03T13:59:42.589241721Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:59:42.605288 containerd[1576]: time="2026-03-03T13:59:42.605232238Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 13:59:42.607202 containerd[1576]: time="2026-03-03T13:59:42.607167751Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 13:59:42.643285 tar[1566]: linux-amd64/README.md
Mar 3 13:59:42.647690 containerd[1576]: time="2026-03-03T13:59:42.647075563Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 13:59:42.648129 containerd[1576]: time="2026-03-03T13:59:42.648013002Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 13:59:42.716092 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 13:59:42.732875 containerd[1576]: time="2026-03-03T13:59:42.732818477Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 13:59:42.734558 containerd[1576]: time="2026-03-03T13:59:42.734176631Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 13:59:42.734558 containerd[1576]: time="2026-03-03T13:59:42.734297296Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 13:59:42.734558 containerd[1576]: time="2026-03-03T13:59:42.734482673Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 13:59:42.734558 containerd[1576]: time="2026-03-03T13:59:42.734739722Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 13:59:42.734943 containerd[1576]: time="2026-03-03T13:59:42.734854737Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 13:59:42.734943 containerd[1576]: time="2026-03-03T13:59:42.734878792Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 13:59:42.734943 containerd[1576]: time="2026-03-03T13:59:42.734896285Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 13:59:42.734943 containerd[1576]: time="2026-03-03T13:59:42.734913938Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 13:59:42.734943 containerd[1576]: time="2026-03-03T13:59:42.734929757Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 13:59:42.734943 containerd[1576]: time="2026-03-03T13:59:42.734941980Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 13:59:42.735486 containerd[1576]: time="2026-03-03T13:59:42.734962268Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 13:59:42.736200 containerd[1576]: time="2026-03-03T13:59:42.735912842Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737265327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737302706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737323825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737438430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737462474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737479476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737491950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737764198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737786960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 13:59:42.737959 containerd[1576]: time="2026-03-03T13:59:42.737804423Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 13:59:42.738945 containerd[1576]: time="2026-03-03T13:59:42.738787939Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 13:59:42.738945 containerd[1576]: time="2026-03-03T13:59:42.738931918Z" level=info msg="Start snapshots syncer"
Mar 3 13:59:42.739236 containerd[1576]: time="2026-03-03T13:59:42.739190360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 13:59:42.741933 containerd[1576]: time="2026-03-03T13:59:42.741256526Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.742175561Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.742946821Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743199833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743223418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743459819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743482872Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743499953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743517626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743535150Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743721337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743736936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.743747686Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.744084024Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:59:42.745488 containerd[1576]: time="2026-03-03T13:59:42.744101367Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.744109552Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.744118729Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.744126002Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.744136201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.744154646Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.745050468Z" level=info msg="runtime interface created"
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.745068331Z" level=info msg="created NRI interface"
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.745086395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.745255911Z" level=info msg="Connect containerd service"
Mar 3 13:59:42.746269 containerd[1576]: time="2026-03-03T13:59:42.745824032Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 13:59:42.757263 containerd[1576]: time="2026-03-03T13:59:42.757032532Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 13:59:42.765287 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 55754 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:42.769846 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:42.808789 systemd-logind[1548]: New session 2 of user core.
Mar 3 13:59:42.821236 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 3 13:59:42.883253 sshd[1688]: Connection closed by 10.0.0.1 port 55754
Mar 3 13:59:42.883158 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Mar 3 13:59:42.901874 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:55754.service: Deactivated successfully.
Mar 3 13:59:42.905980 systemd[1]: session-2.scope: Deactivated successfully.
Mar 3 13:59:42.909323 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit.
Mar 3 13:59:42.917288 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:55768.service - OpenSSH per-connection server daemon (10.0.0.1:55768).
Mar 3 13:59:43.239123 systemd-logind[1548]: Removed session 2.
Mar 3 13:59:43.747839 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 55768 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:43.752557 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:43.774114 systemd-logind[1548]: New session 3 of user core.
Mar 3 13:59:43.780515 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 3 13:59:43.880920 sshd[1704]: Connection closed by 10.0.0.1 port 55768
Mar 3 13:59:43.886342 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Mar 3 13:59:43.920249 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:55768.service: Deactivated successfully.
Mar 3 13:59:43.933447 systemd[1]: session-3.scope: Deactivated successfully.
Mar 3 13:59:43.944550 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit.
Mar 3 13:59:43.955774 systemd-logind[1548]: Removed session 3.
Mar 3 13:59:45.771195 containerd[1576]: time="2026-03-03T13:59:45.769327778Z" level=info msg="Start subscribing containerd event"
Mar 3 13:59:45.773549 containerd[1576]: time="2026-03-03T13:59:45.772493512Z" level=info msg="Start recovering state"
Mar 3 13:59:45.775876 containerd[1576]: time="2026-03-03T13:59:45.775507026Z" level=info msg="Start event monitor"
Mar 3 13:59:45.776059 containerd[1576]: time="2026-03-03T13:59:45.775836793Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 3 13:59:45.776059 containerd[1576]: time="2026-03-03T13:59:45.776040032Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 3 13:59:45.779179 containerd[1576]: time="2026-03-03T13:59:45.777775812Z" level=info msg="Start cni network conf syncer for default"
Mar 3 13:59:45.779179 containerd[1576]: time="2026-03-03T13:59:45.777909401Z" level=info msg="Start streaming server"
Mar 3 13:59:45.779179 containerd[1576]: time="2026-03-03T13:59:45.778246300Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 3 13:59:45.779179 containerd[1576]: time="2026-03-03T13:59:45.778354432Z" level=info msg="runtime interface starting up..."
Mar 3 13:59:45.780305 containerd[1576]: time="2026-03-03T13:59:45.779757766Z" level=info msg="starting plugins..."
Mar 3 13:59:45.780305 containerd[1576]: time="2026-03-03T13:59:45.780145380Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 3 13:59:45.788986 containerd[1576]: time="2026-03-03T13:59:45.788552542Z" level=info msg="containerd successfully booted in 3.885682s"
Mar 3 13:59:45.790924 systemd[1]: Started containerd.service - containerd container runtime.
Mar 3 13:59:46.702167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:59:46.703342 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 3 13:59:46.704278 systemd[1]: Startup finished in 14.605s (kernel) + 17.754s (initrd) + 29.969s (userspace) = 1min 2.329s.
Mar 3 13:59:46.727750 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:59:47.741229 kubelet[1723]: E0303 13:59:47.740804 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:59:47.786027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:59:47.786340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:59:47.787494 systemd[1]: kubelet.service: Consumed 5.121s CPU time, 257.5M memory peak.
Mar 3 13:59:53.917174 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:52734.service - OpenSSH per-connection server daemon (10.0.0.1:52734).
Mar 3 13:59:54.014841 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 52734 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:54.017385 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:54.035269 systemd-logind[1548]: New session 4 of user core.
Mar 3 13:59:54.046208 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 3 13:59:54.085301 sshd[1736]: Connection closed by 10.0.0.1 port 52734
Mar 3 13:59:54.086766 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Mar 3 13:59:54.101763 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:52734.service: Deactivated successfully.
Mar 3 13:59:54.105017 systemd[1]: session-4.scope: Deactivated successfully.
Mar 3 13:59:54.108919 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit.
Mar 3 13:59:54.113156 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:52742.service - OpenSSH per-connection server daemon (10.0.0.1:52742).
Mar 3 13:59:54.118224 systemd-logind[1548]: Removed session 4.
Mar 3 13:59:54.228119 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 52742 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:54.230898 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:54.248907 systemd-logind[1548]: New session 5 of user core.
Mar 3 13:59:54.263300 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 3 13:59:54.289890 sshd[1745]: Connection closed by 10.0.0.1 port 52742
Mar 3 13:59:54.291360 sshd-session[1742]: pam_unix(sshd:session): session closed for user core
Mar 3 13:59:54.310238 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:52742.service: Deactivated successfully.
Mar 3 13:59:54.314525 systemd[1]: session-5.scope: Deactivated successfully.
Mar 3 13:59:54.319416 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit.
Mar 3 13:59:54.325281 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:52754.service - OpenSSH per-connection server daemon (10.0.0.1:52754).
Mar 3 13:59:54.330529 systemd-logind[1548]: Removed session 5.
Mar 3 13:59:54.419303 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 52754 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:54.421761 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:54.430708 systemd-logind[1548]: New session 6 of user core.
Mar 3 13:59:54.443948 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 3 13:59:54.477701 sshd[1754]: Connection closed by 10.0.0.1 port 52754
Mar 3 13:59:54.478776 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Mar 3 13:59:54.491091 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:52754.service: Deactivated successfully.
Mar 3 13:59:54.494548 systemd[1]: session-6.scope: Deactivated successfully.
Mar 3 13:59:54.497033 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit.
Mar 3 13:59:54.503360 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:52758.service - OpenSSH per-connection server daemon (10.0.0.1:52758).
Mar 3 13:59:54.505384 systemd-logind[1548]: Removed session 6.
Mar 3 13:59:54.598027 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 52758 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:54.602709 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:54.623330 systemd-logind[1548]: New session 7 of user core.
Mar 3 13:59:54.642081 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 3 13:59:54.680967 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 3 13:59:54.681538 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:59:54.707773 sudo[1764]: pam_unix(sudo:session): session closed for user root
Mar 3 13:59:54.711159 sshd[1763]: Connection closed by 10.0.0.1 port 52758
Mar 3 13:59:54.712129 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Mar 3 13:59:54.730417 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:52758.service: Deactivated successfully.
Mar 3 13:59:54.733427 systemd[1]: session-7.scope: Deactivated successfully.
Mar 3 13:59:54.735536 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit.
Mar 3 13:59:54.740320 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:52766.service - OpenSSH per-connection server daemon (10.0.0.1:52766).
Mar 3 13:59:54.743291 systemd-logind[1548]: Removed session 7.
Mar 3 13:59:54.824748 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 52766 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:54.827303 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:54.838410 systemd-logind[1548]: New session 8 of user core.
Mar 3 13:59:54.852027 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 3 13:59:54.880750 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 3 13:59:54.881277 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:59:54.895856 sudo[1775]: pam_unix(sudo:session): session closed for user root
Mar 3 13:59:54.910354 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 3 13:59:54.911106 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:59:54.933411 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:59:55.027407 augenrules[1797]: No rules
Mar 3 13:59:55.029359 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:59:55.030045 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:59:55.032097 sudo[1774]: pam_unix(sudo:session): session closed for user root
Mar 3 13:59:55.035729 sshd[1773]: Connection closed by 10.0.0.1 port 52766
Mar 3 13:59:55.036755 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Mar 3 13:59:55.055108 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:52766.service: Deactivated successfully.
Mar 3 13:59:55.058117 systemd[1]: session-8.scope: Deactivated successfully.
Mar 3 13:59:55.060203 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit.
Mar 3 13:59:55.064775 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:52770.service - OpenSSH per-connection server daemon (10.0.0.1:52770).
Mar 3 13:59:55.066838 systemd-logind[1548]: Removed session 8.
Mar 3 13:59:55.156064 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 52770 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 13:59:55.159145 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:59:55.169273 systemd-logind[1548]: New session 9 of user core.
Mar 3 13:59:55.186992 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 3 13:59:55.213430 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 3 13:59:55.214144 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:59:55.792323 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 3 13:59:55.811423 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 3 13:59:56.239341 dockerd[1830]: time="2026-03-03T13:59:56.238983264Z" level=info msg="Starting up"
Mar 3 13:59:56.241691 dockerd[1830]: time="2026-03-03T13:59:56.241384656Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 3 13:59:56.288182 dockerd[1830]: time="2026-03-03T13:59:56.288065034Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 3 13:59:56.425901 dockerd[1830]: time="2026-03-03T13:59:56.424927733Z" level=info msg="Loading containers: start."
Mar 3 13:59:56.451747 kernel: Initializing XFRM netlink socket
Mar 3 13:59:57.238985 systemd-networkd[1477]: docker0: Link UP
Mar 3 13:59:57.248427 dockerd[1830]: time="2026-03-03T13:59:57.248262599Z" level=info msg="Loading containers: done."
Mar 3 13:59:57.290181 dockerd[1830]: time="2026-03-03T13:59:57.289983126Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 3 13:59:57.290181 dockerd[1830]: time="2026-03-03T13:59:57.290157492Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 3 13:59:57.290544 dockerd[1830]: time="2026-03-03T13:59:57.290305668Z" level=info msg="Initializing buildkit"
Mar 3 13:59:57.353944 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3762471143-merged.mount: Deactivated successfully.
Mar 3 13:59:57.366952 dockerd[1830]: time="2026-03-03T13:59:57.366340488Z" level=info msg="Completed buildkit initialization"
Mar 3 13:59:57.376410 dockerd[1830]: time="2026-03-03T13:59:57.376003384Z" level=info msg="Daemon has completed initialization"
Mar 3 13:59:57.378427 dockerd[1830]: time="2026-03-03T13:59:57.376857213Z" level=info msg="API listen on /run/docker.sock"
Mar 3 13:59:57.378113 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 3 13:59:58.011100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:59:58.024236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:59:58.624877 containerd[1576]: time="2026-03-03T13:59:58.622145063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 3 13:59:59.103264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:59:59.125412 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:59:59.256899 kubelet[2059]: E0303 13:59:59.256565 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:59:59.263070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:59:59.263396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:59:59.264349 systemd[1]: kubelet.service: Consumed 819ms CPU time, 110.9M memory peak.
Mar 3 13:59:59.537029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591898970.mount: Deactivated successfully.
Mar 3 14:00:07.998530 containerd[1576]: time="2026-03-03T14:00:07.995292980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:08.004541 containerd[1576]: time="2026-03-03T14:00:08.004293397Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 3 14:00:08.010760 containerd[1576]: time="2026-03-03T14:00:08.010077641Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:08.022474 containerd[1576]: time="2026-03-03T14:00:08.020413952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:08.025295 containerd[1576]: time="2026-03-03T14:00:08.023425023Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 9.401119131s"
Mar 3 14:00:08.025295 containerd[1576]: time="2026-03-03T14:00:08.024995197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 3 14:00:08.043033 containerd[1576]: time="2026-03-03T14:00:08.041417901Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 3 14:00:09.515454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 3 14:00:09.529378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 14:00:10.222155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:00:10.288322 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 14:00:10.748431 kubelet[2138]: E0303 14:00:10.744349 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 14:00:10.754500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 14:00:10.755122 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 14:00:10.756022 systemd[1]: kubelet.service: Consumed 559ms CPU time, 110.7M memory peak.
Mar 3 14:00:16.977936 containerd[1576]: time="2026-03-03T14:00:16.976151658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:16.982251 containerd[1576]: time="2026-03-03T14:00:16.982158936Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 3 14:00:16.989400 containerd[1576]: time="2026-03-03T14:00:16.989227302Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:17.026151 containerd[1576]: time="2026-03-03T14:00:17.024173899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:17.028278 containerd[1576]: time="2026-03-03T14:00:17.028236843Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 8.986693375s"
Mar 3 14:00:17.028449 containerd[1576]: time="2026-03-03T14:00:17.028421712Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 3 14:00:17.037292 containerd[1576]: time="2026-03-03T14:00:17.035517478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 3 14:00:20.401850 update_engine[1553]: I20260303 14:00:20.399552 1553 update_attempter.cc:509] Updating boot flags...
Mar 3 14:00:20.775236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 3 14:00:20.803472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 14:00:21.525353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:00:21.555255 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 14:00:21.692148 containerd[1576]: time="2026-03-03T14:00:21.692083430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:21.703979 containerd[1576]: time="2026-03-03T14:00:21.700470198Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 3 14:00:21.708246 containerd[1576]: time="2026-03-03T14:00:21.708049018Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:21.734203 containerd[1576]: time="2026-03-03T14:00:21.734011075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:21.739040 containerd[1576]: time="2026-03-03T14:00:21.738264874Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 4.702097254s"
Mar 3 14:00:21.739040 containerd[1576]: time="2026-03-03T14:00:21.738307572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 3 14:00:21.741214 containerd[1576]: time="2026-03-03T14:00:21.740539923Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 3 14:00:21.949449 kubelet[2176]: E0303 14:00:21.947487 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 14:00:21.964124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 14:00:21.964412 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 14:00:21.966363 systemd[1]: kubelet.service: Consumed 670ms CPU time, 111M memory peak.
Mar 3 14:00:25.546111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383388691.mount: Deactivated successfully.
Mar 3 14:00:28.829461 containerd[1576]: time="2026-03-03T14:00:28.828517861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:28.839340 containerd[1576]: time="2026-03-03T14:00:28.837462553Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 3 14:00:28.844470 containerd[1576]: time="2026-03-03T14:00:28.844380195Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:28.858366 containerd[1576]: time="2026-03-03T14:00:28.858298313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:28.859503 containerd[1576]: time="2026-03-03T14:00:28.859091320Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 7.118106318s"
Mar 3 14:00:28.859503 containerd[1576]: time="2026-03-03T14:00:28.859226410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 3 14:00:28.864393 containerd[1576]: time="2026-03-03T14:00:28.864134046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 3 14:00:29.668547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663115020.mount: Deactivated successfully.
Mar 3 14:00:32.009564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 3 14:00:32.016354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 14:00:32.754045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:00:32.793302 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 14:00:33.130155 kubelet[2257]: E0303 14:00:33.129287 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 14:00:33.134497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 14:00:33.135081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 14:00:33.137167 systemd[1]: kubelet.service: Consumed 743ms CPU time, 110.6M memory peak.
Mar 3 14:00:34.903479 containerd[1576]: time="2026-03-03T14:00:34.903174075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:34.908481 containerd[1576]: time="2026-03-03T14:00:34.907455548Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 3 14:00:34.916368 containerd[1576]: time="2026-03-03T14:00:34.916305143Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:34.927473 containerd[1576]: time="2026-03-03T14:00:34.927432573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:34.929000 containerd[1576]: time="2026-03-03T14:00:34.928550464Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 6.064252075s"
Mar 3 14:00:34.930117 containerd[1576]: time="2026-03-03T14:00:34.929116103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 3 14:00:34.930533 containerd[1576]: time="2026-03-03T14:00:34.930327610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 3 14:00:35.727388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70599527.mount: Deactivated successfully.
Mar 3 14:00:35.750021 containerd[1576]: time="2026-03-03T14:00:35.748517325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:35.750923 containerd[1576]: time="2026-03-03T14:00:35.750898556Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 3 14:00:35.755042 containerd[1576]: time="2026-03-03T14:00:35.755015387Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:35.764249 containerd[1576]: time="2026-03-03T14:00:35.764204959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:35.765385 containerd[1576]: time="2026-03-03T14:00:35.765238988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 834.877054ms"
Mar 3 14:00:35.765385 containerd[1576]: time="2026-03-03T14:00:35.765278351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 3 14:00:35.769135 containerd[1576]: time="2026-03-03T14:00:35.767971018Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 3 14:00:36.521561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4254864957.mount: Deactivated successfully.
Mar 3 14:00:42.923327 containerd[1576]: time="2026-03-03T14:00:42.922428152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:42.926381 containerd[1576]: time="2026-03-03T14:00:42.926352811Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 3 14:00:42.933302 containerd[1576]: time="2026-03-03T14:00:42.932486102Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:42.941480 containerd[1576]: time="2026-03-03T14:00:42.941379754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:00:42.944231 containerd[1576]: time="2026-03-03T14:00:42.943338635Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 7.175333815s"
Mar 3 14:00:42.944231 containerd[1576]: time="2026-03-03T14:00:42.943380834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 3 14:00:43.261062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 3 14:00:43.265445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 14:00:44.034061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:00:44.084200 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 14:00:44.370508 kubelet[2353]: E0303 14:00:44.369995 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 14:00:44.380350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 14:00:44.381440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 14:00:44.385221 systemd[1]: kubelet.service: Consumed 766ms CPU time, 110.5M memory peak.
Mar 3 14:00:46.342468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:00:46.345545 systemd[1]: kubelet.service: Consumed 766ms CPU time, 110.5M memory peak.
Mar 3 14:00:46.354187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 14:00:46.468180 systemd[1]: Reload requested from client PID 2380 ('systemctl') (unit session-9.scope)...
Mar 3 14:00:46.468347 systemd[1]: Reloading...
Mar 3 14:00:46.721989 zram_generator::config[2422]: No configuration found.
Mar 3 14:00:47.263527 systemd[1]: Reloading finished in 794 ms.
Mar 3 14:00:47.446397 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 3 14:00:47.446532 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 3 14:00:47.447493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:00:47.447552 systemd[1]: kubelet.service: Consumed 292ms CPU time, 98.3M memory peak.
Mar 3 14:00:47.456434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 14:00:48.014332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:00:48.052501 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 14:00:48.430240 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 14:00:48.863332 kubelet[2472]: I0303 14:00:48.862286 2472 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 3 14:00:48.863332 kubelet[2472]: I0303 14:00:48.862487 2472 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 3 14:00:48.863332 kubelet[2472]: I0303 14:00:48.862518 2472 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 3 14:00:48.863332 kubelet[2472]: I0303 14:00:48.862528 2472 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 14:00:48.863332 kubelet[2472]: I0303 14:00:48.863215 2472 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 3 14:00:48.893245 kubelet[2472]: I0303 14:00:48.893060 2472 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 14:00:48.896314 kubelet[2472]: E0303 14:00:48.896097 2472 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 3 14:00:48.917139 kubelet[2472]: I0303 14:00:48.912516 2472 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 3 14:00:48.949247 kubelet[2472]: I0303 14:00:48.948520 2472 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 3 14:00:48.953259 kubelet[2472]: I0303 14:00:48.950551 2472 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 3 14:00:48.953259 kubelet[2472]: I0303 14:00:48.951115 2472 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 3 14:00:48.954337 kubelet[2472]: I0303 14:00:48.953287 2472 topology_manager.go:143] "Creating topology manager with none policy"
Mar 3 14:00:48.954337 kubelet[2472]: I0303 14:00:48.953300 2472 container_manager_linux.go:308] "Creating device plugin manager"
Mar 3 14:00:48.954337 kubelet[2472]: I0303 14:00:48.953439 2472 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 3 14:00:48.963443 kubelet[2472]: I0303 14:00:48.962567 2472 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 3 14:00:48.964296 kubelet[2472]: I0303 14:00:48.964118 2472 kubelet.go:482] "Attempting to sync node with API server"
Mar 3 14:00:48.964296 kubelet[2472]: I0303 14:00:48.964137 2472 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 3 14:00:48.964296 kubelet[2472]: I0303 14:00:48.964167 2472 kubelet.go:394] "Adding apiserver pod source"
Mar 3 14:00:48.964296 kubelet[2472]: I0303 14:00:48.964182 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 3 14:00:48.979149 kubelet[2472]: I0303 14:00:48.976387 2472 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 3 14:00:48.996009 kubelet[2472]: I0303 14:00:48.995141 2472 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 3 14:00:48.996009 kubelet[2472]: I0303 14:00:48.995213 2472 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 3 14:00:48.996009 kubelet[2472]: W0303 14:00:48.995461 2472 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 3 14:00:49.054243 kubelet[2472]: I0303 14:00:49.054019 2472 server.go:1257] "Started kubelet"
Mar 3 14:00:49.061103 kubelet[2472]: I0303 14:00:49.059235 2472 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 3 14:00:49.070036 kubelet[2472]: I0303 14:00:49.069100 2472 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 3 14:00:49.074461 kubelet[2472]: I0303 14:00:49.074051 2472 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 3 14:00:49.079330 kubelet[2472]: I0303 14:00:49.078235 2472 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 14:00:49.159553 kubelet[2472]: E0303 14:00:49.127462 2472 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899599aa39710c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 14:00:49.051422913 +0000 UTC m=+0.969149570,LastTimestamp:2026-03-03 14:00:49.051422913 +0000 UTC m=+0.969149570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 3 14:00:49.183136 kubelet[2472]: I0303 14:00:49.180552 2472 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 3 14:00:49.189276 kubelet[2472]: I0303 14:00:49.189002 2472 server.go:317] "Adding debug handlers to kubelet server"
Mar 3 14:00:49.197083 kubelet[2472]: I0303 14:00:49.181140 2472 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 14:00:49.197410 kubelet[2472]: I0303 14:00:49.197392 2472 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 3 14:00:49.202245 kubelet[2472]: E0303 14:00:49.201297 2472 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 14:00:49.226482 kubelet[2472]: I0303 14:00:49.221323 2472 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 3 14:00:49.235528 kubelet[2472]: I0303 14:00:49.234247 2472 reconciler.go:29] "Reconciler: start to sync state"
Mar 3 14:00:49.236416 kubelet[2472]: E0303 14:00:49.236229 2472 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 14:00:49.236416 kubelet[2472]: E0303 14:00:49.236235 2472 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms"
Mar 3 14:00:49.259192 kubelet[2472]: I0303 14:00:49.255172 2472 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 14:00:49.299088 kubelet[2472]: I0303 14:00:49.293553 2472 factory.go:223] Registration of the containerd container factory successfully
Mar 3 14:00:49.304044 kubelet[2472]: I0303 14:00:49.301168 2472 factory.go:223] Registration of the systemd container factory successfully
Mar 3 14:00:49.328533 kubelet[2472]: E0303 14:00:49.328112 2472 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 14:00:49.455438 kubelet[2472]: E0303 14:00:49.447301 2472 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 14:00:49.475392 kubelet[2472]: E0303 14:00:49.450173 2472 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms"
Mar 3 14:00:49.551409 kubelet[2472]: E0303 14:00:49.550553 2472 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 14:00:49.575230 kubelet[2472]: I0303 14:00:49.575071 2472 cpu_manager.go:225] "Starting" policy="none"
Mar 3 14:00:49.575230 kubelet[2472]: I0303 14:00:49.575096 2472 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 3 14:00:49.575230 kubelet[2472]: I0303 14:00:49.575120 2472 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 3 14:00:49.587216 kubelet[2472]: I0303 14:00:49.586469 2472 policy_none.go:50] "Start"
Mar 3 14:00:49.587216 kubelet[2472]: I0303 14:00:49.587072 2472 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 3 14:00:49.587216 kubelet[2472]: I0303 14:00:49.587095 2472 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 3 14:00:49.602449 kubelet[2472]: I0303 14:00:49.602277 2472 policy_none.go:44] "Start"
Mar 3 14:00:49.656021 kubelet[2472]: E0303 14:00:49.655025 2472 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 14:00:49.656021 kubelet[2472]: I0303 14:00:49.655260 2472 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 3 14:00:49.663491 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 3 14:00:49.671312 kubelet[2472]: I0303 14:00:49.670546 2472 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 3 14:00:49.675354 kubelet[2472]: I0303 14:00:49.675036 2472 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 3 14:00:49.675354 kubelet[2472]: I0303 14:00:49.675229 2472 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 3 14:00:49.675354 kubelet[2472]: E0303 14:00:49.675315 2472 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 14:00:49.715260 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 3 14:00:49.732426 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 3 14:00:49.756407 kubelet[2472]: E0303 14:00:49.755380 2472 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 14:00:49.792244 kubelet[2472]: E0303 14:00:49.791418 2472 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 14:00:49.804970 kubelet[2472]: I0303 14:00:49.804396 2472 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 3 14:00:49.813403 kubelet[2472]: I0303 14:00:49.812048 2472 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 14:00:49.821547 kubelet[2472]: E0303 14:00:49.797332 2472 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 3 14:00:49.833416 kubelet[2472]: I0303 14:00:49.832502 2472 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 3 14:00:49.850551 kubelet[2472]: E0303 14:00:49.848526 2472 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 14:00:49.852258 kubelet[2472]: E0303 14:00:49.851960 2472 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 3 14:00:49.873521 kubelet[2472]: E0303 14:00:49.871313 2472 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms"
Mar 3 14:00:49.938157 kubelet[2472]: I0303 14:00:49.935251 2472 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 3 14:00:49.940076 kubelet[2472]: E0303 14:00:49.939328 2472 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Mar 3 14:00:50.093218 systemd[1]: Created slice kubepods-burstable-pod506f6b5f3d20a7a0533f945e1fe70f3a.slice - libcontainer container kubepods-burstable-pod506f6b5f3d20a7a0533f945e1fe70f3a.slice.
Mar 3 14:00:50.121102 kubelet[2472]: E0303 14:00:50.121072 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 3 14:00:50.130234 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
Mar 3 14:00:50.150942 kubelet[2472]: I0303 14:00:50.150159 2472 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 3 14:00:50.152262 kubelet[2472]: E0303 14:00:50.152071 2472 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Mar 3 14:00:50.155721 kubelet[2472]: E0303 14:00:50.155491 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:50.164015 kubelet[2472]: I0303 14:00:50.160225 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/506f6b5f3d20a7a0533f945e1fe70f3a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"506f6b5f3d20a7a0533f945e1fe70f3a\") " pod="kube-system/kube-apiserver-localhost" Mar 3 14:00:50.164015 kubelet[2472]: I0303 14:00:50.162353 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 14:00:50.164154 kubelet[2472]: I0303 14:00:50.164029 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 14:00:50.164154 kubelet[2472]: I0303 14:00:50.164063 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 14:00:50.164154 kubelet[2472]: I0303 14:00:50.164090 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/506f6b5f3d20a7a0533f945e1fe70f3a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"506f6b5f3d20a7a0533f945e1fe70f3a\") " pod="kube-system/kube-apiserver-localhost" Mar 3 14:00:50.164154 kubelet[2472]: I0303 14:00:50.164108 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 14:00:50.164154 kubelet[2472]: I0303 14:00:50.164126 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 14:00:50.164315 kubelet[2472]: I0303 14:00:50.164148 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 3 14:00:50.172409 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container 
kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 3 14:00:50.176050 kubelet[2472]: I0303 14:00:50.176024 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/506f6b5f3d20a7a0533f945e1fe70f3a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"506f6b5f3d20a7a0533f945e1fe70f3a\") " pod="kube-system/kube-apiserver-localhost" Mar 3 14:00:50.182433 kubelet[2472]: E0303 14:00:50.182135 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:50.435220 kubelet[2472]: E0303 14:00:50.434172 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:50.440949 containerd[1576]: time="2026-03-03T14:00:50.440268634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:506f6b5f3d20a7a0533f945e1fe70f3a,Namespace:kube-system,Attempt:0,}" Mar 3 14:00:50.474564 kubelet[2472]: E0303 14:00:50.474195 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:50.475567 containerd[1576]: time="2026-03-03T14:00:50.475536384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 3 14:00:50.499991 kubelet[2472]: E0303 14:00:50.499320 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:50.501564 containerd[1576]: time="2026-03-03T14:00:50.501379969Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 3 14:00:50.568155 kubelet[2472]: I0303 14:00:50.567176 2472 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 3 14:00:50.568155 kubelet[2472]: E0303 14:00:50.568042 2472 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Mar 3 14:00:50.676933 kubelet[2472]: E0303 14:00:50.676483 2472 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Mar 3 14:00:51.056252 kubelet[2472]: E0303 14:00:51.055420 2472 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 14:00:51.411280 kubelet[2472]: I0303 14:00:51.410046 2472 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 3 14:00:51.412126 kubelet[2472]: E0303 14:00:51.411988 2472 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Mar 3 14:00:51.438286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205189111.mount: Deactivated successfully. 
Mar 3 14:00:51.497181 containerd[1576]: time="2026-03-03T14:00:51.496422876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 14:00:51.509290 containerd[1576]: time="2026-03-03T14:00:51.506109358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 14:00:51.517365 containerd[1576]: time="2026-03-03T14:00:51.516134700Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 14:00:51.520292 containerd[1576]: time="2026-03-03T14:00:51.519525305Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 14:00:51.527075 containerd[1576]: time="2026-03-03T14:00:51.525414096Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 14:00:51.531409 containerd[1576]: time="2026-03-03T14:00:51.531374939Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 3 14:00:51.540169 containerd[1576]: time="2026-03-03T14:00:51.539078010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 14:00:51.549442 containerd[1576]: time="2026-03-03T14:00:51.549263802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.06678522s" Mar 3 14:00:51.589494 containerd[1576]: time="2026-03-03T14:00:51.588383392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 14:00:51.589494 containerd[1576]: time="2026-03-03T14:00:51.589149333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.080083565s" Mar 3 14:00:51.656343 containerd[1576]: time="2026-03-03T14:00:51.653487318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.197941608s" Mar 3 14:00:51.920328 containerd[1576]: time="2026-03-03T14:00:51.919483268Z" level=info msg="connecting to shim 6f8ad6d510b5987e01988ecf44dc4d8f1d998f719bf1a1fe9ce08f2ac8109a27" address="unix:///run/containerd/s/0fbe6dc9b38a4cbb360d7321272f16e7bc945ac9fb2d583effeb54063f32bc07" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:00:51.979274 containerd[1576]: time="2026-03-03T14:00:51.979056000Z" level=info msg="connecting to shim e4342583a0a758c10110058cffa9352e06e7d4d11cd5e7a2ac4973424279fa27" address="unix:///run/containerd/s/71a1972936b7d4f2665da105ebf193a1b38c05364bb3cd387b03c17296848df5" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:00:52.020076 containerd[1576]: 
time="2026-03-03T14:00:52.018979276Z" level=info msg="connecting to shim bec53db874f33e00ee50ce6d0b63599971d4b42b3979e6a05ad2be267ffd1abe" address="unix:///run/containerd/s/9d1ee3ff08b388c0666c65d15533871a0fb11addd5576b4e3b24907f59dd4cf5" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:00:52.213982 systemd[1]: Started cri-containerd-6f8ad6d510b5987e01988ecf44dc4d8f1d998f719bf1a1fe9ce08f2ac8109a27.scope - libcontainer container 6f8ad6d510b5987e01988ecf44dc4d8f1d998f719bf1a1fe9ce08f2ac8109a27. Mar 3 14:00:52.250428 systemd[1]: Started cri-containerd-bec53db874f33e00ee50ce6d0b63599971d4b42b3979e6a05ad2be267ffd1abe.scope - libcontainer container bec53db874f33e00ee50ce6d0b63599971d4b42b3979e6a05ad2be267ffd1abe. Mar 3 14:00:52.280382 kubelet[2472]: E0303 14:00:52.279449 2472 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="3.2s" Mar 3 14:00:52.332513 systemd[1]: Started cri-containerd-e4342583a0a758c10110058cffa9352e06e7d4d11cd5e7a2ac4973424279fa27.scope - libcontainer container e4342583a0a758c10110058cffa9352e06e7d4d11cd5e7a2ac4973424279fa27. 
Mar 3 14:00:52.622544 containerd[1576]: time="2026-03-03T14:00:52.622244141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"bec53db874f33e00ee50ce6d0b63599971d4b42b3979e6a05ad2be267ffd1abe\"" Mar 3 14:00:52.642237 kubelet[2472]: E0303 14:00:52.641163 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:52.667102 containerd[1576]: time="2026-03-03T14:00:52.666445360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4342583a0a758c10110058cffa9352e06e7d4d11cd5e7a2ac4973424279fa27\"" Mar 3 14:00:52.672145 containerd[1576]: time="2026-03-03T14:00:52.671294981Z" level=info msg="CreateContainer within sandbox \"bec53db874f33e00ee50ce6d0b63599971d4b42b3979e6a05ad2be267ffd1abe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 14:00:52.674406 kubelet[2472]: E0303 14:00:52.673968 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:52.695247 containerd[1576]: time="2026-03-03T14:00:52.694395514Z" level=info msg="CreateContainer within sandbox \"e4342583a0a758c10110058cffa9352e06e7d4d11cd5e7a2ac4973424279fa27\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 14:00:52.737441 containerd[1576]: time="2026-03-03T14:00:52.736521174Z" level=info msg="Container c5d708f93d478fcb69fa5c971ff5ed15ec638d15b7349a2970e9970f9861f7b2: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:00:52.738522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153818316.mount: Deactivated successfully. 
Mar 3 14:00:52.751233 containerd[1576]: time="2026-03-03T14:00:52.750361076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:506f6b5f3d20a7a0533f945e1fe70f3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f8ad6d510b5987e01988ecf44dc4d8f1d998f719bf1a1fe9ce08f2ac8109a27\"" Mar 3 14:00:52.756393 kubelet[2472]: E0303 14:00:52.756000 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:52.766494 containerd[1576]: time="2026-03-03T14:00:52.765027452Z" level=info msg="Container 54bfec57d010fa45939c403f2a38f69bc1ea74d56760d29b7940eed4bf125901: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:00:52.779973 containerd[1576]: time="2026-03-03T14:00:52.779435147Z" level=info msg="CreateContainer within sandbox \"6f8ad6d510b5987e01988ecf44dc4d8f1d998f719bf1a1fe9ce08f2ac8109a27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 14:00:52.797941 containerd[1576]: time="2026-03-03T14:00:52.797424072Z" level=info msg="CreateContainer within sandbox \"bec53db874f33e00ee50ce6d0b63599971d4b42b3979e6a05ad2be267ffd1abe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54bfec57d010fa45939c403f2a38f69bc1ea74d56760d29b7940eed4bf125901\"" Mar 3 14:00:52.800153 containerd[1576]: time="2026-03-03T14:00:52.799248422Z" level=info msg="CreateContainer within sandbox \"e4342583a0a758c10110058cffa9352e06e7d4d11cd5e7a2ac4973424279fa27\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5d708f93d478fcb69fa5c971ff5ed15ec638d15b7349a2970e9970f9861f7b2\"" Mar 3 14:00:52.801312 containerd[1576]: time="2026-03-03T14:00:52.801263579Z" level=info msg="StartContainer for \"c5d708f93d478fcb69fa5c971ff5ed15ec638d15b7349a2970e9970f9861f7b2\"" Mar 3 14:00:52.804139 containerd[1576]: time="2026-03-03T14:00:52.802987360Z" level=info 
msg="StartContainer for \"54bfec57d010fa45939c403f2a38f69bc1ea74d56760d29b7940eed4bf125901\"" Mar 3 14:00:52.813421 containerd[1576]: time="2026-03-03T14:00:52.813385781Z" level=info msg="connecting to shim c5d708f93d478fcb69fa5c971ff5ed15ec638d15b7349a2970e9970f9861f7b2" address="unix:///run/containerd/s/71a1972936b7d4f2665da105ebf193a1b38c05364bb3cd387b03c17296848df5" protocol=ttrpc version=3 Mar 3 14:00:52.816140 containerd[1576]: time="2026-03-03T14:00:52.815428766Z" level=info msg="connecting to shim 54bfec57d010fa45939c403f2a38f69bc1ea74d56760d29b7940eed4bf125901" address="unix:///run/containerd/s/9d1ee3ff08b388c0666c65d15533871a0fb11addd5576b4e3b24907f59dd4cf5" protocol=ttrpc version=3 Mar 3 14:00:52.830231 containerd[1576]: time="2026-03-03T14:00:52.829198134Z" level=info msg="Container 1d61a727631dba21f3b8a4df8f07fa211d56f0f3c5e953dfbac04810a13ae6c1: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:00:52.856259 containerd[1576]: time="2026-03-03T14:00:52.855424897Z" level=info msg="CreateContainer within sandbox \"6f8ad6d510b5987e01988ecf44dc4d8f1d998f719bf1a1fe9ce08f2ac8109a27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1d61a727631dba21f3b8a4df8f07fa211d56f0f3c5e953dfbac04810a13ae6c1\"" Mar 3 14:00:52.861252 containerd[1576]: time="2026-03-03T14:00:52.860449176Z" level=info msg="StartContainer for \"1d61a727631dba21f3b8a4df8f07fa211d56f0f3c5e953dfbac04810a13ae6c1\"" Mar 3 14:00:52.888391 containerd[1576]: time="2026-03-03T14:00:52.886526188Z" level=info msg="connecting to shim 1d61a727631dba21f3b8a4df8f07fa211d56f0f3c5e953dfbac04810a13ae6c1" address="unix:///run/containerd/s/0fbe6dc9b38a4cbb360d7321272f16e7bc945ac9fb2d583effeb54063f32bc07" protocol=ttrpc version=3 Mar 3 14:00:52.910118 systemd[1]: Started cri-containerd-54bfec57d010fa45939c403f2a38f69bc1ea74d56760d29b7940eed4bf125901.scope - libcontainer container 54bfec57d010fa45939c403f2a38f69bc1ea74d56760d29b7940eed4bf125901. 
Mar 3 14:00:52.967254 systemd[1]: Started cri-containerd-1d61a727631dba21f3b8a4df8f07fa211d56f0f3c5e953dfbac04810a13ae6c1.scope - libcontainer container 1d61a727631dba21f3b8a4df8f07fa211d56f0f3c5e953dfbac04810a13ae6c1. Mar 3 14:00:53.014375 systemd[1]: Started cri-containerd-c5d708f93d478fcb69fa5c971ff5ed15ec638d15b7349a2970e9970f9861f7b2.scope - libcontainer container c5d708f93d478fcb69fa5c971ff5ed15ec638d15b7349a2970e9970f9861f7b2. Mar 3 14:00:53.018287 kubelet[2472]: I0303 14:00:53.017957 2472 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 3 14:00:53.024143 kubelet[2472]: E0303 14:00:53.023463 2472 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Mar 3 14:00:53.213039 containerd[1576]: time="2026-03-03T14:00:53.209332755Z" level=info msg="StartContainer for \"54bfec57d010fa45939c403f2a38f69bc1ea74d56760d29b7940eed4bf125901\" returns successfully" Mar 3 14:00:53.261949 containerd[1576]: time="2026-03-03T14:00:53.261488377Z" level=info msg="StartContainer for \"1d61a727631dba21f3b8a4df8f07fa211d56f0f3c5e953dfbac04810a13ae6c1\" returns successfully" Mar 3 14:00:53.355141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568165909.mount: Deactivated successfully. 
Mar 3 14:00:53.453305 containerd[1576]: time="2026-03-03T14:00:53.451471804Z" level=info msg="StartContainer for \"c5d708f93d478fcb69fa5c971ff5ed15ec638d15b7349a2970e9970f9861f7b2\" returns successfully" Mar 3 14:00:53.846269 kubelet[2472]: E0303 14:00:53.846200 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:53.852234 kubelet[2472]: E0303 14:00:53.851122 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:53.879057 kubelet[2472]: E0303 14:00:53.877223 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:53.879057 kubelet[2472]: E0303 14:00:53.877455 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:53.879057 kubelet[2472]: E0303 14:00:53.878435 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:53.881514 kubelet[2472]: E0303 14:00:53.881474 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:54.899188 kubelet[2472]: E0303 14:00:54.895193 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:54.899188 kubelet[2472]: E0303 14:00:54.895406 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 3 14:00:54.913308 kubelet[2472]: E0303 14:00:54.913249 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:54.915058 kubelet[2472]: E0303 14:00:54.914473 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:54.921286 kubelet[2472]: E0303 14:00:54.921238 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:54.924183 kubelet[2472]: E0303 14:00:54.924139 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:55.915440 kubelet[2472]: E0303 14:00:55.915397 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:55.920437 kubelet[2472]: E0303 14:00:55.920250 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:00:55.921357 kubelet[2472]: E0303 14:00:55.921063 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:55.921357 kubelet[2472]: E0303 14:00:55.921072 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:55.921357 kubelet[2472]: E0303 14:00:55.916064 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Mar 3 14:00:55.921522 kubelet[2472]: E0303 14:00:55.921444 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:56.249545 kubelet[2472]: I0303 14:00:56.246433 2472 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 3 14:00:59.860522 kubelet[2472]: E0303 14:00:59.857536 2472 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 3 14:01:05.771287 kubelet[2472]: E0303 14:01:05.768152 2472 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 3 14:01:07.463227 kubelet[2472]: E0303 14:01:07.461981 2472 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 14:01:08.068244 kubelet[2472]: E0303 14:01:07.548104 2472 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 3 14:01:08.113274 kubelet[2472]: E0303 14:01:08.101369 2472 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.1899599aa39710c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 14:00:49.051422913 +0000 UTC m=+0.969149570,LastTimestamp:2026-03-03 14:00:49.051422913 +0000 UTC m=+0.969149570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 14:01:09.624082 kubelet[2472]: E0303 14:01:09.622877 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:01:09.624082 kubelet[2472]: E0303 14:01:09.624372 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:01:09.682449 kubelet[2472]: E0303 14:01:09.681834 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:09.709860 kubelet[2472]: E0303 14:01:09.702423 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:01:09.714943 kubelet[2472]: E0303 14:01:09.714487 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:09.718556 kubelet[2472]: E0303 14:01:09.715493 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:09.873927 kubelet[2472]: E0303 14:01:09.872009 2472 eviction_manager.go:297] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"localhost\" not found" Mar 3 14:01:11.001524 kubelet[2472]: E0303 14:01:11.000035 2472 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 14:01:11.001524 kubelet[2472]: E0303 14:01:11.000352 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:14.413265 kubelet[2472]: I0303 14:01:14.409430 2472 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 3 14:01:19.885116 kubelet[2472]: E0303 14:01:19.878983 2472 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 3 14:01:22.875162 kubelet[2472]: I0303 14:01:22.870902 2472 apiserver.go:52] "Watching apiserver" Mar 3 14:01:23.042346 kubelet[2472]: E0303 14:01:23.042306 2472 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 3 14:01:23.155935 kubelet[2472]: I0303 14:01:23.138009 2472 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 14:01:23.206401 kubelet[2472]: E0303 14:01:23.201411 2472 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899599aa39710c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 14:00:49.051422913 +0000 UTC m=+0.969149570,LastTimestamp:2026-03-03 14:00:49.051422913 +0000 UTC m=+0.969149570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 14:01:23.206401 kubelet[2472]: I0303 14:01:23.204200 2472 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 3 14:01:23.206401 kubelet[2472]: E0303 14:01:23.204228 2472 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 3 14:01:23.236218 kubelet[2472]: I0303 14:01:23.234552 2472 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 14:01:23.337118 kubelet[2472]: I0303 14:01:23.336151 2472 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 14:01:23.340284 kubelet[2472]: E0303 14:01:23.338433 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:23.373224 kubelet[2472]: I0303 14:01:23.372400 2472 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 14:01:23.377432 kubelet[2472]: E0303 14:01:23.377244 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:23.425125 kubelet[2472]: E0303 14:01:23.421093 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:28.134846 systemd[1]: Reload requested from client PID 2766 ('systemctl') (unit session-9.scope)... Mar 3 14:01:28.135016 systemd[1]: Reloading... 
Mar 3 14:01:28.440909 kubelet[2472]: E0303 14:01:28.437032 2472 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:28.565084 kubelet[2472]: I0303 14:01:28.564256 2472 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.564112675 podStartE2EDuration="5.564112675s" podCreationTimestamp="2026-03-03 14:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 14:01:28.527934885 +0000 UTC m=+40.445661533" watchObservedRunningTime="2026-03-03 14:01:28.564112675 +0000 UTC m=+40.481839313" Mar 3 14:01:28.567342 kubelet[2472]: I0303 14:01:28.567182 2472 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.5671724959999995 podStartE2EDuration="5.567172496s" podCreationTimestamp="2026-03-03 14:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 14:01:28.56605238 +0000 UTC m=+40.483779028" watchObservedRunningTime="2026-03-03 14:01:28.567172496 +0000 UTC m=+40.484899134" Mar 3 14:01:28.679124 zram_generator::config[2809]: No configuration found. Mar 3 14:01:29.345209 systemd[1]: Reloading finished in 1207 ms. Mar 3 14:01:29.471347 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 14:01:29.506529 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 14:01:29.511181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 14:01:29.511239 systemd[1]: kubelet.service: Consumed 14.121s CPU time, 129.7M memory peak. Mar 3 14:01:29.523457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 3 14:01:33.018553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 14:01:33.064153 (kubelet)[2855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 14:01:33.456044 kubelet[2855]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 14:01:33.510918 kubelet[2855]: I0303 14:01:33.503026 2855 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 3 14:01:33.510918 kubelet[2855]: I0303 14:01:33.503210 2855 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 3 14:01:33.510918 kubelet[2855]: I0303 14:01:33.503231 2855 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 3 14:01:33.510918 kubelet[2855]: I0303 14:01:33.503237 2855 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 14:01:33.510918 kubelet[2855]: I0303 14:01:33.508327 2855 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 3 14:01:33.512131 kubelet[2855]: I0303 14:01:33.511537 2855 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 3 14:01:33.544400 kubelet[2855]: I0303 14:01:33.544220 2855 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 14:01:33.685101 kubelet[2855]: I0303 14:01:33.684093 2855 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 3 14:01:33.768397 kubelet[2855]: I0303 14:01:33.768092 2855 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 3 14:01:33.771362 kubelet[2855]: I0303 14:01:33.768567 2855 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 3 14:01:33.771830 kubelet[2855]: I0303 14:01:33.771365 2855 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 3 14:01:33.772285 kubelet[2855]: I0303 14:01:33.772018 2855 topology_manager.go:143] "Creating topology manager with none policy"
Mar 3 14:01:33.772285 kubelet[2855]: I0303 14:01:33.772037 2855 container_manager_linux.go:308] "Creating device plugin manager"
Mar 3 14:01:33.772285 kubelet[2855]: I0303 14:01:33.772071 2855 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 3 14:01:33.772370 kubelet[2855]: I0303 14:01:33.772322 2855 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 3 14:01:33.773523 kubelet[2855]: I0303 14:01:33.773263 2855 kubelet.go:482] "Attempting to sync node with API server"
Mar 3 14:01:33.773523 kubelet[2855]: I0303 14:01:33.773429 2855 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 3 14:01:33.773523 kubelet[2855]: I0303 14:01:33.773454 2855 kubelet.go:394] "Adding apiserver pod source"
Mar 3 14:01:33.773523 kubelet[2855]: I0303 14:01:33.773466 2855 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 3 14:01:33.795566 kubelet[2855]: I0303 14:01:33.794403 2855 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 3 14:01:33.803105 kubelet[2855]: I0303 14:01:33.802188 2855 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 3 14:01:33.803285 kubelet[2855]: I0303 14:01:33.803267 2855 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 3 14:01:33.905393 kubelet[2855]: I0303 14:01:33.905363 2855 server.go:1257] "Started kubelet"
Mar 3 14:01:33.913199 kubelet[2855]: I0303 14:01:33.913164 2855 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 3 14:01:33.936239 kubelet[2855]: I0303 14:01:33.936183 2855 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 3 14:01:33.937074 kubelet[2855]: I0303 14:01:33.937051 2855 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 3 14:01:33.961832 kubelet[2855]: I0303 14:01:33.959526 2855 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 14:01:33.975070 kubelet[2855]: I0303 14:01:33.971458 2855 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 3 14:01:33.991422 kubelet[2855]: I0303 14:01:33.990412 2855 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 14:01:34.002493 kubelet[2855]: I0303 14:01:34.001417 2855 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 3 14:01:34.012241 kubelet[2855]: I0303 14:01:34.009430 2855 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 3 14:01:34.012241 kubelet[2855]: I0303 14:01:34.011271 2855 reconciler.go:29] "Reconciler: start to sync state"
Mar 3 14:01:34.017859 kubelet[2855]: I0303 14:01:34.015255 2855 server.go:317] "Adding debug handlers to kubelet server"
Mar 3 14:01:34.061928 kubelet[2855]: I0303 14:01:34.061837 2855 factory.go:223] Registration of the containerd container factory successfully
Mar 3 14:01:34.072118 kubelet[2855]: I0303 14:01:34.069910 2855 factory.go:223] Registration of the systemd container factory successfully
Mar 3 14:01:34.072118 kubelet[2855]: I0303 14:01:34.070908 2855 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 14:01:34.082268 kubelet[2855]: E0303 14:01:34.069147 2855 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 14:01:34.169468 kubelet[2855]: I0303 14:01:34.165840 2855 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 3 14:01:34.180832 kubelet[2855]: I0303 14:01:34.179853 2855 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 3 14:01:34.180832 kubelet[2855]: I0303 14:01:34.180369 2855 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 3 14:01:34.180832 kubelet[2855]: I0303 14:01:34.180393 2855 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 3 14:01:34.184204 kubelet[2855]: E0303 14:01:34.184174 2855 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 14:01:34.284514 kubelet[2855]: E0303 14:01:34.284220 2855 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 3 14:01:34.375535 kubelet[2855]: I0303 14:01:34.375327 2855 cpu_manager.go:225] "Starting" policy="none"
Mar 3 14:01:34.375535 kubelet[2855]: I0303 14:01:34.375341 2855 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 3 14:01:34.375535 kubelet[2855]: I0303 14:01:34.375361 2855 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 3 14:01:34.380826 kubelet[2855]: I0303 14:01:34.379989 2855 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 3 14:01:34.380826 kubelet[2855]: I0303 14:01:34.380007 2855 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 3 14:01:34.380826 kubelet[2855]: I0303 14:01:34.380181 2855 policy_none.go:50] "Start"
Mar 3 14:01:34.380826 kubelet[2855]: I0303 14:01:34.380191 2855 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 3 14:01:34.380826 kubelet[2855]: I0303 14:01:34.380327 2855 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 3 14:01:34.381511 kubelet[2855]: I0303 14:01:34.380563 2855 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 3 14:01:34.381847 kubelet[2855]: I0303 14:01:34.381559 2855 policy_none.go:44] "Start"
Mar 3 14:01:34.432011 kubelet[2855]: E0303 14:01:34.431394 2855 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 14:01:34.432011 kubelet[2855]: I0303 14:01:34.431960 2855 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 3 14:01:34.432011 kubelet[2855]: I0303 14:01:34.431972 2855 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 14:01:34.432897 kubelet[2855]: I0303 14:01:34.432384 2855 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 3 14:01:34.434554 kubelet[2855]: I0303 14:01:34.433908 2855 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 3 14:01:34.438206 kubelet[2855]: E0303 14:01:34.436388 2855 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 14:01:34.472844 containerd[1576]: time="2026-03-03T14:01:34.462412757Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 3 14:01:34.479869 kubelet[2855]: I0303 14:01:34.474504 2855 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 3 14:01:34.485507 kubelet[2855]: I0303 14:01:34.485399 2855 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 3 14:01:34.487732 kubelet[2855]: I0303 14:01:34.487476 2855 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 3 14:01:34.487928 kubelet[2855]: I0303 14:01:34.487921 2855 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 3 14:01:34.515925 kubelet[2855]: E0303 14:01:34.515879 2855 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 3 14:01:34.516040 kubelet[2855]: E0303 14:01:34.515959 2855 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 3 14:01:34.516040 kubelet[2855]: E0303 14:01:34.515993 2855 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 3 14:01:34.526965 kubelet[2855]: I0303 14:01:34.526302 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/506f6b5f3d20a7a0533f945e1fe70f3a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"506f6b5f3d20a7a0533f945e1fe70f3a\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 14:01:34.526965 kubelet[2855]: I0303 14:01:34.526457 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/506f6b5f3d20a7a0533f945e1fe70f3a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"506f6b5f3d20a7a0533f945e1fe70f3a\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 14:01:34.526965 kubelet[2855]: I0303 14:01:34.526491 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 14:01:34.526965 kubelet[2855]: I0303 14:01:34.526512 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 14:01:34.526965 kubelet[2855]: I0303 14:01:34.526527 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/506f6b5f3d20a7a0533f945e1fe70f3a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"506f6b5f3d20a7a0533f945e1fe70f3a\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 14:01:34.527420 kubelet[2855]: I0303 14:01:34.526540 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 14:01:34.527420 kubelet[2855]: I0303 14:01:34.526558 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 14:01:34.527420 kubelet[2855]: I0303 14:01:34.527280 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 14:01:34.527420 kubelet[2855]: I0303 14:01:34.527296 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 3 14:01:34.621914 kubelet[2855]: I0303 14:01:34.621257 2855 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 3 14:01:34.648036 kubelet[2855]: I0303 14:01:34.647892 2855 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 3 14:01:34.650251 kubelet[2855]: I0303 14:01:34.648899 2855 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 3 14:01:34.781989 kubelet[2855]: I0303 14:01:34.781883 2855 apiserver.go:52] "Watching apiserver"
Mar 3 14:01:34.817443 kubelet[2855]: E0303 14:01:34.816065 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:34.818041 kubelet[2855]: E0303 14:01:34.818018 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:34.818414 kubelet[2855]: E0303 14:01:34.818393 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:34.834493 kubelet[2855]: I0303 14:01:34.832458 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80d3f450-930b-4748-a00b-f4631dfb91ca-kube-proxy\") pod \"kube-proxy-wk9sc\" (UID: \"80d3f450-930b-4748-a00b-f4631dfb91ca\") " pod="kube-system/kube-proxy-wk9sc"
Mar 3 14:01:34.834493 kubelet[2855]: I0303 14:01:34.832490 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80d3f450-930b-4748-a00b-f4631dfb91ca-xtables-lock\") pod \"kube-proxy-wk9sc\" (UID: \"80d3f450-930b-4748-a00b-f4631dfb91ca\") " pod="kube-system/kube-proxy-wk9sc"
Mar 3 14:01:34.834493 kubelet[2855]: I0303 14:01:34.832509 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80d3f450-930b-4748-a00b-f4631dfb91ca-lib-modules\") pod \"kube-proxy-wk9sc\" (UID: \"80d3f450-930b-4748-a00b-f4631dfb91ca\") " pod="kube-system/kube-proxy-wk9sc"
Mar 3 14:01:34.834493 kubelet[2855]: I0303 14:01:34.832527 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qlt\" (UniqueName: \"kubernetes.io/projected/80d3f450-930b-4748-a00b-f4631dfb91ca-kube-api-access-l8qlt\") pod \"kube-proxy-wk9sc\" (UID: \"80d3f450-930b-4748-a00b-f4631dfb91ca\") " pod="kube-system/kube-proxy-wk9sc"
Mar 3 14:01:34.856461 systemd[1]: Created slice kubepods-besteffort-pod80d3f450_930b_4748_a00b_f4631dfb91ca.slice - libcontainer container kubepods-besteffort-pod80d3f450_930b_4748_a00b_f4631dfb91ca.slice.
Mar 3 14:01:34.914138 kubelet[2855]: I0303 14:01:34.913872 2855 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 3 14:01:35.210071 kubelet[2855]: E0303 14:01:35.208977 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:35.222211 containerd[1576]: time="2026-03-03T14:01:35.221939070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wk9sc,Uid:80d3f450-930b-4748-a00b-f4631dfb91ca,Namespace:kube-system,Attempt:0,}"
Mar 3 14:01:35.335879 kubelet[2855]: E0303 14:01:35.334896 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:35.338007 kubelet[2855]: E0303 14:01:35.337480 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:35.339188 kubelet[2855]: E0303 14:01:35.338492 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:35.636460 containerd[1576]: time="2026-03-03T14:01:35.636162421Z" level=info msg="connecting to shim c6ea6f0a1f24fc42c2dccf56ba718e875ba1c53510ab6c4924faaa4fbc90505a" address="unix:///run/containerd/s/5d33a2a4464db8475ab5df7113ff649caf05f973a0730909f9c6871707bb657f" namespace=k8s.io protocol=ttrpc version=3
Mar 3 14:01:35.960279 kubelet[2855]: I0303 14:01:35.957169 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1fc4237-b450-4308-8215-fd874f1c4f6a-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-cjrbp\" (UID: \"d1fc4237-b450-4308-8215-fd874f1c4f6a\") " pod="tigera-operator/tigera-operator-6cf4cccc57-cjrbp"
Mar 3 14:01:35.960279 kubelet[2855]: I0303 14:01:35.957342 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hd85\" (UniqueName: \"kubernetes.io/projected/d1fc4237-b450-4308-8215-fd874f1c4f6a-kube-api-access-9hd85\") pod \"tigera-operator-6cf4cccc57-cjrbp\" (UID: \"d1fc4237-b450-4308-8215-fd874f1c4f6a\") " pod="tigera-operator/tigera-operator-6cf4cccc57-cjrbp"
Mar 3 14:01:35.982999 systemd[1]: Created slice kubepods-besteffort-podd1fc4237_b450_4308_8215_fd874f1c4f6a.slice - libcontainer container kubepods-besteffort-podd1fc4237_b450_4308_8215_fd874f1c4f6a.slice.
Mar 3 14:01:36.166020 systemd[1]: Started cri-containerd-c6ea6f0a1f24fc42c2dccf56ba718e875ba1c53510ab6c4924faaa4fbc90505a.scope - libcontainer container c6ea6f0a1f24fc42c2dccf56ba718e875ba1c53510ab6c4924faaa4fbc90505a.
Mar 3 14:01:36.318312 containerd[1576]: time="2026-03-03T14:01:36.318108658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-cjrbp,Uid:d1fc4237-b450-4308-8215-fd874f1c4f6a,Namespace:tigera-operator,Attempt:0,}"
Mar 3 14:01:36.355094 kubelet[2855]: E0303 14:01:36.353417 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:36.360932 kubelet[2855]: E0303 14:01:36.358051 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:36.458972 containerd[1576]: time="2026-03-03T14:01:36.458923275Z" level=info msg="connecting to shim f5669f6c65188b8ed104cddf572022f8ee2d97dd5d2eb668c2cdbc560ca3c6f8" address="unix:///run/containerd/s/ea79ec56db69ba6cf14d712c62e0b1b8f947ae951dd116d4c3eadfa6ca348eee" namespace=k8s.io protocol=ttrpc version=3
Mar 3 14:01:36.482992 containerd[1576]: time="2026-03-03T14:01:36.482954138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wk9sc,Uid:80d3f450-930b-4748-a00b-f4631dfb91ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6ea6f0a1f24fc42c2dccf56ba718e875ba1c53510ab6c4924faaa4fbc90505a\""
Mar 3 14:01:36.491281 kubelet[2855]: E0303 14:01:36.490929 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:36.530264 containerd[1576]: time="2026-03-03T14:01:36.529551265Z" level=info msg="CreateContainer within sandbox \"c6ea6f0a1f24fc42c2dccf56ba718e875ba1c53510ab6c4924faaa4fbc90505a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 3 14:01:36.613977 containerd[1576]: time="2026-03-03T14:01:36.613881648Z" level=info msg="Container 67bf9b9e0b0a01cbd16cb4c3d98cb3384ae5aca3573f07888bfbb02f9dcef1d4: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:01:36.616383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945532688.mount: Deactivated successfully.
Mar 3 14:01:36.632247 systemd[1]: Started cri-containerd-f5669f6c65188b8ed104cddf572022f8ee2d97dd5d2eb668c2cdbc560ca3c6f8.scope - libcontainer container f5669f6c65188b8ed104cddf572022f8ee2d97dd5d2eb668c2cdbc560ca3c6f8.
Mar 3 14:01:36.663253 containerd[1576]: time="2026-03-03T14:01:36.663207494Z" level=info msg="CreateContainer within sandbox \"c6ea6f0a1f24fc42c2dccf56ba718e875ba1c53510ab6c4924faaa4fbc90505a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"67bf9b9e0b0a01cbd16cb4c3d98cb3384ae5aca3573f07888bfbb02f9dcef1d4\""
Mar 3 14:01:36.671941 containerd[1576]: time="2026-03-03T14:01:36.671907323Z" level=info msg="StartContainer for \"67bf9b9e0b0a01cbd16cb4c3d98cb3384ae5aca3573f07888bfbb02f9dcef1d4\""
Mar 3 14:01:36.685266 containerd[1576]: time="2026-03-03T14:01:36.685233432Z" level=info msg="connecting to shim 67bf9b9e0b0a01cbd16cb4c3d98cb3384ae5aca3573f07888bfbb02f9dcef1d4" address="unix:///run/containerd/s/5d33a2a4464db8475ab5df7113ff649caf05f973a0730909f9c6871707bb657f" protocol=ttrpc version=3
Mar 3 14:01:36.835228 systemd[1]: Started cri-containerd-67bf9b9e0b0a01cbd16cb4c3d98cb3384ae5aca3573f07888bfbb02f9dcef1d4.scope - libcontainer container 67bf9b9e0b0a01cbd16cb4c3d98cb3384ae5aca3573f07888bfbb02f9dcef1d4.
Mar 3 14:01:37.010386 containerd[1576]: time="2026-03-03T14:01:37.007023599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-cjrbp,Uid:d1fc4237-b450-4308-8215-fd874f1c4f6a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f5669f6c65188b8ed104cddf572022f8ee2d97dd5d2eb668c2cdbc560ca3c6f8\""
Mar 3 14:01:37.045412 containerd[1576]: time="2026-03-03T14:01:37.045193448Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 3 14:01:37.424293 containerd[1576]: time="2026-03-03T14:01:37.422432699Z" level=info msg="StartContainer for \"67bf9b9e0b0a01cbd16cb4c3d98cb3384ae5aca3573f07888bfbb02f9dcef1d4\" returns successfully"
Mar 3 14:01:38.424192 kubelet[2855]: E0303 14:01:38.424007 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:38.509413 kubelet[2855]: I0303 14:01:38.507319 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-wk9sc" podStartSLOduration=5.507303659 podStartE2EDuration="5.507303659s" podCreationTimestamp="2026-03-03 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 14:01:38.500370501 +0000 UTC m=+5.380376165" watchObservedRunningTime="2026-03-03 14:01:38.507303659 +0000 UTC m=+5.387309322"
Mar 3 14:01:38.921936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57864045.mount: Deactivated successfully.
Mar 3 14:01:39.430197 kubelet[2855]: E0303 14:01:39.429444 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:01:46.741961 containerd[1576]: time="2026-03-03T14:01:46.741465473Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:01:46.744549 containerd[1576]: time="2026-03-03T14:01:46.744187612Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 3 14:01:46.749302 containerd[1576]: time="2026-03-03T14:01:46.749263208Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:01:46.760298 containerd[1576]: time="2026-03-03T14:01:46.759176284Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:01:46.762329 containerd[1576]: time="2026-03-03T14:01:46.761315367Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 9.715773546s"
Mar 3 14:01:46.762329 containerd[1576]: time="2026-03-03T14:01:46.762248805Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 3 14:01:46.796005 containerd[1576]: time="2026-03-03T14:01:46.795521962Z" level=info msg="CreateContainer within sandbox \"f5669f6c65188b8ed104cddf572022f8ee2d97dd5d2eb668c2cdbc560ca3c6f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 3 14:01:46.851040 containerd[1576]: time="2026-03-03T14:01:46.848523468Z" level=info msg="Container e6914d545e51b63eb3926a14f882d363af26a249bd978379b5c2cd57c25d5c90: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:01:46.849053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171562651.mount: Deactivated successfully.
Mar 3 14:01:46.900757 containerd[1576]: time="2026-03-03T14:01:46.900125178Z" level=info msg="CreateContainer within sandbox \"f5669f6c65188b8ed104cddf572022f8ee2d97dd5d2eb668c2cdbc560ca3c6f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e6914d545e51b63eb3926a14f882d363af26a249bd978379b5c2cd57c25d5c90\""
Mar 3 14:01:46.907266 containerd[1576]: time="2026-03-03T14:01:46.906387853Z" level=info msg="StartContainer for \"e6914d545e51b63eb3926a14f882d363af26a249bd978379b5c2cd57c25d5c90\""
Mar 3 14:01:46.929367 containerd[1576]: time="2026-03-03T14:01:46.929057540Z" level=info msg="connecting to shim e6914d545e51b63eb3926a14f882d363af26a249bd978379b5c2cd57c25d5c90" address="unix:///run/containerd/s/ea79ec56db69ba6cf14d712c62e0b1b8f947ae951dd116d4c3eadfa6ca348eee" protocol=ttrpc version=3
Mar 3 14:01:47.114296 systemd[1]: Started cri-containerd-e6914d545e51b63eb3926a14f882d363af26a249bd978379b5c2cd57c25d5c90.scope - libcontainer container e6914d545e51b63eb3926a14f882d363af26a249bd978379b5c2cd57c25d5c90.
Mar 3 14:01:47.464521 containerd[1576]: time="2026-03-03T14:01:47.464274322Z" level=info msg="StartContainer for \"e6914d545e51b63eb3926a14f882d363af26a249bd978379b5c2cd57c25d5c90\" returns successfully"
Mar 3 14:01:47.554372 kubelet[2855]: I0303 14:01:47.553295 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-cjrbp" podStartSLOduration=2.817284952 podStartE2EDuration="12.553278425s" podCreationTimestamp="2026-03-03 14:01:35 +0000 UTC" firstStartedPulling="2026-03-03 14:01:37.034485679 +0000 UTC m=+3.914491342" lastFinishedPulling="2026-03-03 14:01:46.770479152 +0000 UTC m=+13.650484815" observedRunningTime="2026-03-03 14:01:47.549117295 +0000 UTC m=+14.429122968" watchObservedRunningTime="2026-03-03 14:01:47.553278425 +0000 UTC m=+14.433284098"
Mar 3 14:02:02.465234 kubelet[2855]: E0303 14:02:02.444241 2855 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.905s"
Mar 3 14:02:06.209396 sudo[1810]: pam_unix(sudo:session): session closed for user root
Mar 3 14:02:06.256449 sshd[1809]: Connection closed by 10.0.0.1 port 52770
Mar 3 14:02:06.280460 sshd-session[1806]: pam_unix(sshd:session): session closed for user core
Mar 3 14:02:06.365164 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:52770.service: Deactivated successfully.
Mar 3 14:02:06.395506 systemd[1]: session-9.scope: Deactivated successfully.
Mar 3 14:02:06.397360 systemd[1]: session-9.scope: Consumed 12.881s CPU time, 227.7M memory peak.
Mar 3 14:02:06.541076 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit.
Mar 3 14:02:06.556458 systemd-logind[1548]: Removed session 9.
Mar 3 14:02:12.032168 systemd[1]: Created slice kubepods-besteffort-podf9d79c22_8384_4d35_a1ee_b584445e7a55.slice - libcontainer container kubepods-besteffort-podf9d79c22_8384_4d35_a1ee_b584445e7a55.slice.
Mar 3 14:02:12.164089 kubelet[2855]: I0303 14:02:12.162985 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76746\" (UniqueName: \"kubernetes.io/projected/f9d79c22-8384-4d35-a1ee-b584445e7a55-kube-api-access-76746\") pod \"calico-typha-6b8c56f8d8-jptt6\" (UID: \"f9d79c22-8384-4d35-a1ee-b584445e7a55\") " pod="calico-system/calico-typha-6b8c56f8d8-jptt6" Mar 3 14:02:12.165292 kubelet[2855]: I0303 14:02:12.165009 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9d79c22-8384-4d35-a1ee-b584445e7a55-tigera-ca-bundle\") pod \"calico-typha-6b8c56f8d8-jptt6\" (UID: \"f9d79c22-8384-4d35-a1ee-b584445e7a55\") " pod="calico-system/calico-typha-6b8c56f8d8-jptt6" Mar 3 14:02:12.165292 kubelet[2855]: I0303 14:02:12.165040 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9d79c22-8384-4d35-a1ee-b584445e7a55-typha-certs\") pod \"calico-typha-6b8c56f8d8-jptt6\" (UID: \"f9d79c22-8384-4d35-a1ee-b584445e7a55\") " pod="calico-system/calico-typha-6b8c56f8d8-jptt6" Mar 3 14:02:12.261238 systemd[1]: Created slice kubepods-besteffort-pod3fe78e43_84db_409f_9c2e_4cb87c764752.slice - libcontainer container kubepods-besteffort-pod3fe78e43_84db_409f_9c2e_4cb87c764752.slice. 
Mar 3 14:02:12.372463 kubelet[2855]: I0303 14:02:12.370339 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3fe78e43-84db-409f-9c2e-4cb87c764752-node-certs\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.374886 kubelet[2855]: I0303 14:02:12.374860 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-cni-bin-dir\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.375136 kubelet[2855]: I0303 14:02:12.375115 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-cni-net-dir\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.375237 kubelet[2855]: I0303 14:02:12.375216 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-nodeproc\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.375348 kubelet[2855]: I0303 14:02:12.375328 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-xtables-lock\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.387886 kubelet[2855]: I0303 14:02:12.383563 2855 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8xf\" (UniqueName: \"kubernetes.io/projected/3fe78e43-84db-409f-9c2e-4cb87c764752-kube-api-access-sh8xf\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.388081 kubelet[2855]: I0303 14:02:12.388060 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-bpffs\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.402117 kubelet[2855]: I0303 14:02:12.402080 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe78e43-84db-409f-9c2e-4cb87c764752-tigera-ca-bundle\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.405122 kubelet[2855]: I0303 14:02:12.405093 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-var-lib-calico\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.405260 kubelet[2855]: I0303 14:02:12.405240 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-cni-log-dir\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.405352 kubelet[2855]: I0303 14:02:12.405334 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-lib-modules\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.405445 kubelet[2855]: I0303 14:02:12.405427 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-flexvol-driver-host\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.410035 kubelet[2855]: I0303 14:02:12.406109 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-policysync\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.410035 kubelet[2855]: I0303 14:02:12.406142 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-sys-fs\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.410035 kubelet[2855]: I0303 14:02:12.406168 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3fe78e43-84db-409f-9c2e-4cb87c764752-var-run-calico\") pod \"calico-node-7n5cl\" (UID: \"3fe78e43-84db-409f-9c2e-4cb87c764752\") " pod="calico-system/calico-node-7n5cl" Mar 3 14:02:12.470256 kubelet[2855]: E0303 14:02:12.470205 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4" Mar 3 14:02:12.508917 kubelet[2855]: I0303 14:02:12.507020 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c228fc2b-0000-4d3b-b679-8086e76c78a4-registration-dir\") pod \"csi-node-driver-xk9vk\" (UID: \"c228fc2b-0000-4d3b-b679-8086e76c78a4\") " pod="calico-system/csi-node-driver-xk9vk" Mar 3 14:02:12.519837 kubelet[2855]: I0303 14:02:12.509424 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpjsg\" (UniqueName: \"kubernetes.io/projected/c228fc2b-0000-4d3b-b679-8086e76c78a4-kube-api-access-wpjsg\") pod \"csi-node-driver-xk9vk\" (UID: \"c228fc2b-0000-4d3b-b679-8086e76c78a4\") " pod="calico-system/csi-node-driver-xk9vk" Mar 3 14:02:12.519837 kubelet[2855]: I0303 14:02:12.509476 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c228fc2b-0000-4d3b-b679-8086e76c78a4-varrun\") pod \"csi-node-driver-xk9vk\" (UID: \"c228fc2b-0000-4d3b-b679-8086e76c78a4\") " pod="calico-system/csi-node-driver-xk9vk" Mar 3 14:02:12.519837 kubelet[2855]: I0303 14:02:12.509553 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c228fc2b-0000-4d3b-b679-8086e76c78a4-socket-dir\") pod \"csi-node-driver-xk9vk\" (UID: \"c228fc2b-0000-4d3b-b679-8086e76c78a4\") " pod="calico-system/csi-node-driver-xk9vk" Mar 3 14:02:12.519837 kubelet[2855]: I0303 14:02:12.510015 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c228fc2b-0000-4d3b-b679-8086e76c78a4-kubelet-dir\") pod 
\"csi-node-driver-xk9vk\" (UID: \"c228fc2b-0000-4d3b-b679-8086e76c78a4\") " pod="calico-system/csi-node-driver-xk9vk" Mar 3 14:02:12.524165 kubelet[2855]: E0303 14:02:12.523480 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.528300 kubelet[2855]: W0303 14:02:12.528151 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.528387 kubelet[2855]: E0303 14:02:12.528310 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.530515 kubelet[2855]: E0303 14:02:12.530345 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.530515 kubelet[2855]: W0303 14:02:12.530513 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.531136 kubelet[2855]: E0303 14:02:12.530534 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.600375 kubelet[2855]: E0303 14:02:12.600343 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.600559 kubelet[2855]: W0303 14:02:12.600535 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.604507 kubelet[2855]: E0303 14:02:12.603528 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.606112 kubelet[2855]: E0303 14:02:12.606093 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.608005 kubelet[2855]: W0303 14:02:12.607982 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.608099 kubelet[2855]: E0303 14:02:12.608085 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.619857 kubelet[2855]: E0303 14:02:12.617282 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.619857 kubelet[2855]: W0303 14:02:12.617307 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.619857 kubelet[2855]: E0303 14:02:12.617333 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.619857 kubelet[2855]: E0303 14:02:12.618262 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.619857 kubelet[2855]: W0303 14:02:12.618273 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.619857 kubelet[2855]: E0303 14:02:12.618286 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.621879 kubelet[2855]: E0303 14:02:12.620974 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.621879 kubelet[2855]: W0303 14:02:12.620991 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.621879 kubelet[2855]: E0303 14:02:12.621003 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.621879 kubelet[2855]: E0303 14:02:12.621258 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.621879 kubelet[2855]: W0303 14:02:12.621271 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.621879 kubelet[2855]: E0303 14:02:12.621284 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.621879 kubelet[2855]: E0303 14:02:12.621534 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.621879 kubelet[2855]: W0303 14:02:12.621546 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.621879 kubelet[2855]: E0303 14:02:12.621561 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.626152 kubelet[2855]: E0303 14:02:12.623359 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.626152 kubelet[2855]: W0303 14:02:12.623494 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.626152 kubelet[2855]: E0303 14:02:12.623513 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.633152 kubelet[2855]: E0303 14:02:12.629950 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.633152 kubelet[2855]: W0303 14:02:12.629970 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.633152 kubelet[2855]: E0303 14:02:12.629990 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.633152 kubelet[2855]: E0303 14:02:12.631383 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.633152 kubelet[2855]: W0303 14:02:12.631395 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.633152 kubelet[2855]: E0303 14:02:12.631408 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.633152 kubelet[2855]: E0303 14:02:12.632936 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.633152 kubelet[2855]: W0303 14:02:12.632947 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.633152 kubelet[2855]: E0303 14:02:12.632961 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.643159 kubelet[2855]: E0303 14:02:12.633349 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.643159 kubelet[2855]: W0303 14:02:12.633359 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.643159 kubelet[2855]: E0303 14:02:12.633370 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.643159 kubelet[2855]: E0303 14:02:12.635498 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.643159 kubelet[2855]: W0303 14:02:12.635510 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.643159 kubelet[2855]: E0303 14:02:12.635526 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.643159 kubelet[2855]: E0303 14:02:12.636057 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.643159 kubelet[2855]: W0303 14:02:12.636068 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.643159 kubelet[2855]: E0303 14:02:12.636078 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.643159 kubelet[2855]: E0303 14:02:12.636315 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.651977 kubelet[2855]: W0303 14:02:12.636327 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.651977 kubelet[2855]: E0303 14:02:12.636338 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.651977 kubelet[2855]: E0303 14:02:12.636933 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.651977 kubelet[2855]: W0303 14:02:12.636945 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.651977 kubelet[2855]: E0303 14:02:12.636956 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.651977 kubelet[2855]: E0303 14:02:12.638193 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.651977 kubelet[2855]: W0303 14:02:12.638205 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.651977 kubelet[2855]: E0303 14:02:12.638219 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.651977 kubelet[2855]: E0303 14:02:12.640102 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.651977 kubelet[2855]: W0303 14:02:12.640113 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.653157 kubelet[2855]: E0303 14:02:12.640125 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.653157 kubelet[2855]: E0303 14:02:12.642227 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.653157 kubelet[2855]: W0303 14:02:12.642241 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.653157 kubelet[2855]: E0303 14:02:12.642254 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.653157 kubelet[2855]: E0303 14:02:12.642535 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.653157 kubelet[2855]: W0303 14:02:12.642546 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.653157 kubelet[2855]: E0303 14:02:12.642559 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.653157 kubelet[2855]: E0303 14:02:12.647214 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.653157 kubelet[2855]: W0303 14:02:12.647233 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.653157 kubelet[2855]: E0303 14:02:12.647250 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.659310 kubelet[2855]: E0303 14:02:12.647491 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.659310 kubelet[2855]: W0303 14:02:12.647504 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.659310 kubelet[2855]: E0303 14:02:12.647519 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.659310 kubelet[2855]: E0303 14:02:12.648171 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.659310 kubelet[2855]: W0303 14:02:12.648182 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.659310 kubelet[2855]: E0303 14:02:12.648193 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.659310 kubelet[2855]: E0303 14:02:12.648472 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.659310 kubelet[2855]: W0303 14:02:12.648483 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.659310 kubelet[2855]: E0303 14:02:12.648496 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.659310 kubelet[2855]: E0303 14:02:12.649878 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.662542 kubelet[2855]: W0303 14:02:12.649889 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.662542 kubelet[2855]: E0303 14:02:12.649900 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.662542 kubelet[2855]: E0303 14:02:12.650391 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.662542 kubelet[2855]: W0303 14:02:12.650402 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.662542 kubelet[2855]: E0303 14:02:12.650413 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.662542 kubelet[2855]: E0303 14:02:12.652558 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.662542 kubelet[2855]: W0303 14:02:12.652926 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.662542 kubelet[2855]: E0303 14:02:12.652943 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 14:02:12.680015 kubelet[2855]: E0303 14:02:12.678548 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:02:12.683942 containerd[1576]: time="2026-03-03T14:02:12.681509882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b8c56f8d8-jptt6,Uid:f9d79c22-8384-4d35-a1ee-b584445e7a55,Namespace:calico-system,Attempt:0,}" Mar 3 14:02:12.765275 kubelet[2855]: E0303 14:02:12.764520 2855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 14:02:12.765275 kubelet[2855]: W0303 14:02:12.765098 2855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 14:02:12.765275 kubelet[2855]: E0303 14:02:12.765125 2855 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 14:02:12.926439 containerd[1576]: time="2026-03-03T14:02:12.923379228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7n5cl,Uid:3fe78e43-84db-409f-9c2e-4cb87c764752,Namespace:calico-system,Attempt:0,}" Mar 3 14:02:12.994407 containerd[1576]: time="2026-03-03T14:02:12.994117259Z" level=info msg="connecting to shim d127cfd137a593767e6f36a4465da3442b97fb4c8d10908163d7ad6788dbf1ae" address="unix:///run/containerd/s/639aa5b3d13d7c1833ee45ecf2c7abaf8e70b72d30773d5c863a3906d708e5dd" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:02:13.158926 containerd[1576]: time="2026-03-03T14:02:13.155175834Z" level=info msg="connecting to shim a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc" address="unix:///run/containerd/s/dce41e445b3e86e2a88cd2e5bd7a971448780139cdda31b367c24a23d1193f0a" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:02:13.338482 systemd[1]: Started cri-containerd-d127cfd137a593767e6f36a4465da3442b97fb4c8d10908163d7ad6788dbf1ae.scope - libcontainer container d127cfd137a593767e6f36a4465da3442b97fb4c8d10908163d7ad6788dbf1ae. Mar 3 14:02:13.447382 systemd[1]: Started cri-containerd-a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc.scope - libcontainer container a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc. 
Mar 3 14:02:13.711236 containerd[1576]: time="2026-03-03T14:02:13.710418632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7n5cl,Uid:3fe78e43-84db-409f-9c2e-4cb87c764752,Namespace:calico-system,Attempt:0,} returns sandbox id \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\"" Mar 3 14:02:13.718183 containerd[1576]: time="2026-03-03T14:02:13.717532207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b8c56f8d8-jptt6,Uid:f9d79c22-8384-4d35-a1ee-b584445e7a55,Namespace:calico-system,Attempt:0,} returns sandbox id \"d127cfd137a593767e6f36a4465da3442b97fb4c8d10908163d7ad6788dbf1ae\"" Mar 3 14:02:13.720938 kubelet[2855]: E0303 14:02:13.720180 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:02:13.721352 containerd[1576]: time="2026-03-03T14:02:13.721186752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 3 14:02:14.189477 kubelet[2855]: E0303 14:02:14.189320 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4" Mar 3 14:02:15.662445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2585452266.mount: Deactivated successfully. 
Mar 3 14:02:16.048979 containerd[1576]: time="2026-03-03T14:02:16.048720384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:16.054014 containerd[1576]: time="2026-03-03T14:02:16.053968153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433"
Mar 3 14:02:16.057287 containerd[1576]: time="2026-03-03T14:02:16.056864960Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:16.065346 containerd[1576]: time="2026-03-03T14:02:16.064803180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:16.065490 containerd[1576]: time="2026-03-03T14:02:16.065464158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.344250992s"
Mar 3 14:02:16.065944 containerd[1576]: time="2026-03-03T14:02:16.065556099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 3 14:02:16.073319 containerd[1576]: time="2026-03-03T14:02:16.072404608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 3 14:02:16.089896 containerd[1576]: time="2026-03-03T14:02:16.089244887Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 3 14:02:16.142407 containerd[1576]: time="2026-03-03T14:02:16.142003830Z" level=info msg="Container bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:02:16.171475 containerd[1576]: time="2026-03-03T14:02:16.170886888Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef\""
Mar 3 14:02:16.180085 containerd[1576]: time="2026-03-03T14:02:16.178045967Z" level=info msg="StartContainer for \"bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef\""
Mar 3 14:02:16.182779 kubelet[2855]: E0303 14:02:16.181879 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:16.193684 containerd[1576]: time="2026-03-03T14:02:16.193074946Z" level=info msg="connecting to shim bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef" address="unix:///run/containerd/s/dce41e445b3e86e2a88cd2e5bd7a971448780139cdda31b367c24a23d1193f0a" protocol=ttrpc version=3
Mar 3 14:02:16.327960 systemd[1]: Started cri-containerd-bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef.scope - libcontainer container bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef.
Mar 3 14:02:16.883417 containerd[1576]: time="2026-03-03T14:02:16.882865852Z" level=info msg="StartContainer for \"bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef\" returns successfully"
Mar 3 14:02:16.967115 systemd[1]: cri-containerd-bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef.scope: Deactivated successfully.
Mar 3 14:02:16.991980 containerd[1576]: time="2026-03-03T14:02:16.991402537Z" level=info msg="received container exit event container_id:\"bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef\" id:\"bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef\" pid:3429 exited_at:{seconds:1772546536 nanos:982351777}"
Mar 3 14:02:17.148481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc39c78d60c2fe052ce5386757fd90b4fe3a8a9ecbd915fd89b2d5c7aff332ef-rootfs.mount: Deactivated successfully.
Mar 3 14:02:18.188878 kubelet[2855]: E0303 14:02:18.185855 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:20.184299 kubelet[2855]: E0303 14:02:20.183488 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:22.182971 kubelet[2855]: E0303 14:02:22.182508 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:22.652115 containerd[1576]: time="2026-03-03T14:02:22.651556838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:22.656446 containerd[1576]: time="2026-03-03T14:02:22.656414057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413"
Mar 3 14:02:22.660380 containerd[1576]: time="2026-03-03T14:02:22.660349189Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:22.667189 containerd[1576]: time="2026-03-03T14:02:22.666350521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:22.667189 containerd[1576]: time="2026-03-03T14:02:22.667079993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 6.594526762s"
Mar 3 14:02:22.667189 containerd[1576]: time="2026-03-03T14:02:22.667109722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 3 14:02:22.671423 containerd[1576]: time="2026-03-03T14:02:22.671397913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 3 14:02:22.759124 containerd[1576]: time="2026-03-03T14:02:22.759086262Z" level=info msg="CreateContainer within sandbox \"d127cfd137a593767e6f36a4465da3442b97fb4c8d10908163d7ad6788dbf1ae\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 3 14:02:22.795209 containerd[1576]: time="2026-03-03T14:02:22.794514492Z" level=info msg="Container eeeb22e2f9beee936d7d8698d37fe52bc17517d5936ca118ac846a47b8083bc7: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:02:22.797130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309223484.mount: Deactivated successfully.
Mar 3 14:02:22.836166 containerd[1576]: time="2026-03-03T14:02:22.835487471Z" level=info msg="CreateContainer within sandbox \"d127cfd137a593767e6f36a4465da3442b97fb4c8d10908163d7ad6788dbf1ae\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eeeb22e2f9beee936d7d8698d37fe52bc17517d5936ca118ac846a47b8083bc7\""
Mar 3 14:02:22.843276 containerd[1576]: time="2026-03-03T14:02:22.843057471Z" level=info msg="StartContainer for \"eeeb22e2f9beee936d7d8698d37fe52bc17517d5936ca118ac846a47b8083bc7\""
Mar 3 14:02:22.846533 containerd[1576]: time="2026-03-03T14:02:22.846505401Z" level=info msg="connecting to shim eeeb22e2f9beee936d7d8698d37fe52bc17517d5936ca118ac846a47b8083bc7" address="unix:///run/containerd/s/639aa5b3d13d7c1833ee45ecf2c7abaf8e70b72d30773d5c863a3906d708e5dd" protocol=ttrpc version=3
Mar 3 14:02:22.942521 systemd[1]: Started cri-containerd-eeeb22e2f9beee936d7d8698d37fe52bc17517d5936ca118ac846a47b8083bc7.scope - libcontainer container eeeb22e2f9beee936d7d8698d37fe52bc17517d5936ca118ac846a47b8083bc7.
Mar 3 14:02:23.236494 containerd[1576]: time="2026-03-03T14:02:23.236386200Z" level=info msg="StartContainer for \"eeeb22e2f9beee936d7d8698d37fe52bc17517d5936ca118ac846a47b8083bc7\" returns successfully"
Mar 3 14:02:23.500077 kubelet[2855]: E0303 14:02:23.499333 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:24.183500 kubelet[2855]: E0303 14:02:24.182552 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:24.506371 kubelet[2855]: E0303 14:02:24.503540 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:24.545718 kubelet[2855]: I0303 14:02:24.543990 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6b8c56f8d8-jptt6" podStartSLOduration=4.600183908 podStartE2EDuration="13.543977376s" podCreationTimestamp="2026-03-03 14:02:11 +0000 UTC" firstStartedPulling="2026-03-03 14:02:13.726163834 +0000 UTC m=+40.606169497" lastFinishedPulling="2026-03-03 14:02:22.669957302 +0000 UTC m=+49.549962965" observedRunningTime="2026-03-03 14:02:23.5833532 +0000 UTC m=+50.463358863" watchObservedRunningTime="2026-03-03 14:02:24.543977376 +0000 UTC m=+51.423983039"
Mar 3 14:02:25.511843 kubelet[2855]: E0303 14:02:25.511481 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:26.181970 kubelet[2855]: E0303 14:02:26.181530 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:26.517513 kubelet[2855]: E0303 14:02:26.517061 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:28.182506 kubelet[2855]: E0303 14:02:28.182070 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:30.182472 kubelet[2855]: E0303 14:02:30.182411 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:32.192208 kubelet[2855]: E0303 14:02:32.182556 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:34.186345 kubelet[2855]: E0303 14:02:34.185191 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:36.185538 kubelet[2855]: E0303 14:02:36.184299 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:38.184933 kubelet[2855]: E0303 14:02:38.184102 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:40.186883 kubelet[2855]: E0303 14:02:40.186219 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:42.192146 kubelet[2855]: E0303 14:02:42.191492 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:44.195789 kubelet[2855]: E0303 14:02:44.195141 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:46.182191 kubelet[2855]: E0303 14:02:46.182132 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:48.199492 kubelet[2855]: E0303 14:02:48.199438 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:48.719558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113709540.mount: Deactivated successfully.
Mar 3 14:02:48.867002 containerd[1576]: time="2026-03-03T14:02:48.866051694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:48.872417 containerd[1576]: time="2026-03-03T14:02:48.871935222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 3 14:02:48.874888 containerd[1576]: time="2026-03-03T14:02:48.874847021Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:49.027417 containerd[1576]: time="2026-03-03T14:02:49.026949827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:02:49.028719 containerd[1576]: time="2026-03-03T14:02:49.028404712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 26.356893126s"
Mar 3 14:02:49.029080 containerd[1576]: time="2026-03-03T14:02:49.028912818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 3 14:02:49.050040 containerd[1576]: time="2026-03-03T14:02:49.049198289Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 3 14:02:49.146019 containerd[1576]: time="2026-03-03T14:02:49.142567279Z" level=info msg="Container de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:02:49.183419 kubelet[2855]: E0303 14:02:49.182095 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:49.324443 containerd[1576]: time="2026-03-03T14:02:49.323168495Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37\""
Mar 3 14:02:49.333948 containerd[1576]: time="2026-03-03T14:02:49.332174936Z" level=info msg="StartContainer for \"de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37\""
Mar 3 14:02:49.345536 containerd[1576]: time="2026-03-03T14:02:49.341544745Z" level=info msg="connecting to shim de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37" address="unix:///run/containerd/s/dce41e445b3e86e2a88cd2e5bd7a971448780139cdda31b367c24a23d1193f0a" protocol=ttrpc version=3
Mar 3 14:02:49.546011 systemd[1]: Started cri-containerd-de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37.scope - libcontainer container de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37.
Mar 3 14:02:50.036771 containerd[1576]: time="2026-03-03T14:02:50.036447716Z" level=info msg="StartContainer for \"de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37\" returns successfully"
Mar 3 14:02:50.183450 kubelet[2855]: E0303 14:02:50.182477 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:50.318020 systemd[1]: cri-containerd-de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37.scope: Deactivated successfully.
Mar 3 14:02:50.565122 containerd[1576]: time="2026-03-03T14:02:50.565046525Z" level=info msg="received container exit event container_id:\"de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37\" id:\"de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37\" pid:3538 exited_at:{seconds:1772546570 nanos:509321109}"
Mar 3 14:02:50.735503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de5eac9c7046c8f8a602bc091d0e2e438c25c408a186b188aaccbda333713b37-rootfs.mount: Deactivated successfully.
Mar 3 14:02:51.819975 containerd[1576]: time="2026-03-03T14:02:51.819930570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 3 14:02:52.188331 kubelet[2855]: E0303 14:02:52.182175 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:54.194278 kubelet[2855]: E0303 14:02:54.192160 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:56.211432 kubelet[2855]: E0303 14:02:56.211233 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:02:57.196976 kubelet[2855]: E0303 14:02:57.194316 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:58.182227 kubelet[2855]: E0303 14:02:58.182175 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:03:00.182279 kubelet[2855]: E0303 14:03:00.182223 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:03:02.181527 kubelet[2855]: E0303 14:03:02.181457 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:03:04.183519 kubelet[2855]: E0303 14:03:04.183239 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:03:05.184300 kubelet[2855]: E0303 14:03:05.184008 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:05.198920 kubelet[2855]: E0303 14:03:05.197960 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:05.848062 containerd[1576]: time="2026-03-03T14:03:05.847029981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:05.851048 containerd[1576]: time="2026-03-03T14:03:05.850311278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 3 14:03:05.855446 containerd[1576]: time="2026-03-03T14:03:05.855112287Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:05.865465 containerd[1576]: time="2026-03-03T14:03:05.865169527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:05.877434 containerd[1576]: time="2026-03-03T14:03:05.877254834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 14.056080775s"
Mar 3 14:03:05.877434 containerd[1576]: time="2026-03-03T14:03:05.877312165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 3 14:03:05.911923 containerd[1576]: time="2026-03-03T14:03:05.910358072Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 3 14:03:06.014382 containerd[1576]: time="2026-03-03T14:03:06.014324810Z" level=info msg="Container 4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:03:06.113177 containerd[1576]: time="2026-03-03T14:03:06.111922389Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5\""
Mar 3 14:03:06.118853 containerd[1576]: time="2026-03-03T14:03:06.117142191Z" level=info msg="StartContainer for \"4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5\""
Mar 3 14:03:06.150281 containerd[1576]: time="2026-03-03T14:03:06.150232355Z" level=info msg="connecting to shim 4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5" address="unix:///run/containerd/s/dce41e445b3e86e2a88cd2e5bd7a971448780139cdda31b367c24a23d1193f0a" protocol=ttrpc version=3
Mar 3 14:03:06.194926 kubelet[2855]: E0303 14:03:06.193411 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:03:06.294024 systemd[1]: Started cri-containerd-4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5.scope - libcontainer container 4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5.
Mar 3 14:03:06.680956 containerd[1576]: time="2026-03-03T14:03:06.677936400Z" level=info msg="StartContainer for \"4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5\" returns successfully"
Mar 3 14:03:08.188153 kubelet[2855]: E0303 14:03:08.187256 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:03:09.263091 systemd[1]: cri-containerd-4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5.scope: Deactivated successfully.
Mar 3 14:03:09.263507 systemd[1]: cri-containerd-4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5.scope: Consumed 3.032s CPU time, 182.3M memory peak, 4.4M read from disk, 177M written to disk.
Mar 3 14:03:09.271205 containerd[1576]: time="2026-03-03T14:03:09.269911752Z" level=info msg="received container exit event container_id:\"4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5\" id:\"4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5\" pid:3597 exited_at:{seconds:1772546589 nanos:267022726}"
Mar 3 14:03:09.374391 kubelet[2855]: I0303 14:03:09.373567 2855 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 3 14:03:09.472186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d7d906e02fc53be93109ae3e2865afc7216b3e8ead0e697a0998720fa8c7ee5-rootfs.mount: Deactivated successfully.
Mar 3 14:03:09.716556 systemd[1]: Created slice kubepods-besteffort-podc5130667_7949_42cd_8cf1_169b3aece1e9.slice - libcontainer container kubepods-besteffort-podc5130667_7949_42cd_8cf1_169b3aece1e9.slice.
Mar 3 14:03:09.758316 systemd[1]: Created slice kubepods-besteffort-pod28e246eb_3fd8_4f45_9264_380dc4fa62c5.slice - libcontainer container kubepods-besteffort-pod28e246eb_3fd8_4f45_9264_380dc4fa62c5.slice.
Mar 3 14:03:09.818539 systemd[1]: Created slice kubepods-burstable-poda6da2aeb_b230_45f3_8292_e23a3c17d60c.slice - libcontainer container kubepods-burstable-poda6da2aeb_b230_45f3_8292_e23a3c17d60c.slice.
Mar 3 14:03:09.827467 kubelet[2855]: I0303 14:03:09.826519 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmhw\" (UniqueName: \"kubernetes.io/projected/28e246eb-3fd8-4f45-9264-380dc4fa62c5-kube-api-access-4nmhw\") pod \"whisker-7bc5886b4c-b9f66\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " pod="calico-system/whisker-7bc5886b4c-b9f66"
Mar 3 14:03:09.827467 kubelet[2855]: I0303 14:03:09.827358 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5130667-7949-42cd-8cf1-169b3aece1e9-calico-apiserver-certs\") pod \"calico-apiserver-6677b978bd-gbpdl\" (UID: \"c5130667-7949-42cd-8cf1-169b3aece1e9\") " pod="calico-system/calico-apiserver-6677b978bd-gbpdl"
Mar 3 14:03:09.827467 kubelet[2855]: I0303 14:03:09.827392 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-nginx-config\") pod \"whisker-7bc5886b4c-b9f66\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " pod="calico-system/whisker-7bc5886b4c-b9f66"
Mar 3 14:03:09.827467 kubelet[2855]: I0303 14:03:09.827424 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sll8j\" (UniqueName: \"kubernetes.io/projected/c5130667-7949-42cd-8cf1-169b3aece1e9-kube-api-access-sll8j\") pod \"calico-apiserver-6677b978bd-gbpdl\" (UID: \"c5130667-7949-42cd-8cf1-169b3aece1e9\") " pod="calico-system/calico-apiserver-6677b978bd-gbpdl"
Mar 3 14:03:09.827467 kubelet[2855]: I0303 14:03:09.827440 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-backend-key-pair\") pod \"whisker-7bc5886b4c-b9f66\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " pod="calico-system/whisker-7bc5886b4c-b9f66"
Mar 3 14:03:09.828282 kubelet[2855]: I0303 14:03:09.827454 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-ca-bundle\") pod \"whisker-7bc5886b4c-b9f66\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " pod="calico-system/whisker-7bc5886b4c-b9f66"
Mar 3 14:03:09.880144 systemd[1]: Created slice kubepods-besteffort-pod6516ca51_7391_44dd_b25b_3ff46412e8d5.slice - libcontainer container kubepods-besteffort-pod6516ca51_7391_44dd_b25b_3ff46412e8d5.slice.
Mar 3 14:03:09.920341 systemd[1]: Created slice kubepods-besteffort-pod61665e7b_9fb3_4659_b57b_d6e2b5ad54ac.slice - libcontainer container kubepods-besteffort-pod61665e7b_9fb3_4659_b57b_d6e2b5ad54ac.slice.
Mar 3 14:03:09.930447 kubelet[2855]: I0303 14:03:09.930231 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6da2aeb-b230-45f3-8292-e23a3c17d60c-config-volume\") pod \"coredns-7d764666f9-8dpx6\" (UID: \"a6da2aeb-b230-45f3-8292-e23a3c17d60c\") " pod="kube-system/coredns-7d764666f9-8dpx6"
Mar 3 14:03:09.930447 kubelet[2855]: I0303 14:03:09.930283 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6516ca51-7391-44dd-b25b-3ff46412e8d5-tigera-ca-bundle\") pod \"calico-kube-controllers-6df4cc67f5-nnxg6\" (UID: \"6516ca51-7391-44dd-b25b-3ff46412e8d5\") " pod="calico-system/calico-kube-controllers-6df4cc67f5-nnxg6"
Mar 3 14:03:09.930447 kubelet[2855]: I0303 14:03:09.930318 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvf4l\" (UniqueName: \"kubernetes.io/projected/6516ca51-7391-44dd-b25b-3ff46412e8d5-kube-api-access-xvf4l\") pod \"calico-kube-controllers-6df4cc67f5-nnxg6\" (UID: \"6516ca51-7391-44dd-b25b-3ff46412e8d5\") " pod="calico-system/calico-kube-controllers-6df4cc67f5-nnxg6"
Mar 3 14:03:09.930447 kubelet[2855]: I0303 14:03:09.930341 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z82x\" (UniqueName: \"kubernetes.io/projected/39dff308-ff46-444b-bddd-0b45b42e0715-kube-api-access-5z82x\") pod \"coredns-7d764666f9-vvdpm\" (UID: \"39dff308-ff46-444b-bddd-0b45b42e0715\") " pod="kube-system/coredns-7d764666f9-vvdpm"
Mar 3 14:03:09.930447 kubelet[2855]: I0303 14:03:09.930370 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpscr\" (UniqueName: \"kubernetes.io/projected/61665e7b-9fb3-4659-b57b-d6e2b5ad54ac-kube-api-access-wpscr\") pod \"calico-apiserver-6677b978bd-vb4zq\" (UID: \"61665e7b-9fb3-4659-b57b-d6e2b5ad54ac\") " pod="calico-system/calico-apiserver-6677b978bd-vb4zq"
Mar 3 14:03:09.933322 kubelet[2855]: I0303 14:03:09.930528 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39dff308-ff46-444b-bddd-0b45b42e0715-config-volume\") pod \"coredns-7d764666f9-vvdpm\" (UID: \"39dff308-ff46-444b-bddd-0b45b42e0715\") " pod="kube-system/coredns-7d764666f9-vvdpm"
Mar 3 14:03:09.933322 kubelet[2855]: I0303 14:03:09.930553 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndm2b\" (UniqueName: \"kubernetes.io/projected/a6da2aeb-b230-45f3-8292-e23a3c17d60c-kube-api-access-ndm2b\") pod \"coredns-7d764666f9-8dpx6\" (UID: \"a6da2aeb-b230-45f3-8292-e23a3c17d60c\") " pod="kube-system/coredns-7d764666f9-8dpx6"
Mar 3 14:03:09.937914 kubelet[2855]: I0303 14:03:09.937021 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61665e7b-9fb3-4659-b57b-d6e2b5ad54ac-calico-apiserver-certs\") pod \"calico-apiserver-6677b978bd-vb4zq\" (UID: \"61665e7b-9fb3-4659-b57b-d6e2b5ad54ac\") " pod="calico-system/calico-apiserver-6677b978bd-vb4zq"
Mar 3 14:03:09.937914 kubelet[2855]: I0303 14:03:09.937067 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c967439-ed8f-40a6-ac45-3c8bab198902-config\") pod \"goldmane-9f7667bb8-wxxg6\" (UID: \"2c967439-ed8f-40a6-ac45-3c8bab198902\") " pod="calico-system/goldmane-9f7667bb8-wxxg6"
Mar 3 14:03:09.937914 kubelet[2855]: I0303 14:03:09.937089 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8zjb\" (UniqueName: \"kubernetes.io/projected/2c967439-ed8f-40a6-ac45-3c8bab198902-kube-api-access-s8zjb\") pod \"goldmane-9f7667bb8-wxxg6\" (UID: \"2c967439-ed8f-40a6-ac45-3c8bab198902\") " pod="calico-system/goldmane-9f7667bb8-wxxg6"
Mar 3 14:03:09.937914 kubelet[2855]: I0303 14:03:09.937152 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c967439-ed8f-40a6-ac45-3c8bab198902-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-wxxg6\" (UID: \"2c967439-ed8f-40a6-ac45-3c8bab198902\") " pod="calico-system/goldmane-9f7667bb8-wxxg6"
Mar 3 14:03:09.937914 kubelet[2855]: I0303 14:03:09.937178 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2c967439-ed8f-40a6-ac45-3c8bab198902-goldmane-key-pair\") pod \"goldmane-9f7667bb8-wxxg6\" (UID: \"2c967439-ed8f-40a6-ac45-3c8bab198902\") " pod="calico-system/goldmane-9f7667bb8-wxxg6"
Mar 3 14:03:09.952069 systemd[1]: Created slice 
kubepods-burstable-pod39dff308_ff46_444b_bddd_0b45b42e0715.slice - libcontainer container kubepods-burstable-pod39dff308_ff46_444b_bddd_0b45b42e0715.slice. Mar 3 14:03:10.063193 systemd[1]: Created slice kubepods-besteffort-pod2c967439_ed8f_40a6_ac45_3c8bab198902.slice - libcontainer container kubepods-besteffort-pod2c967439_ed8f_40a6_ac45_3c8bab198902.slice. Mar 3 14:03:10.235031 systemd[1]: Created slice kubepods-besteffort-podc228fc2b_0000_4d3b_b679_8086e76c78a4.slice - libcontainer container kubepods-besteffort-podc228fc2b_0000_4d3b_b679_8086e76c78a4.slice. Mar 3 14:03:10.258374 containerd[1576]: time="2026-03-03T14:03:10.258288987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6677b978bd-vb4zq,Uid:61665e7b-9fb3-4659-b57b-d6e2b5ad54ac,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:10.260756 containerd[1576]: time="2026-03-03T14:03:10.260379242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk9vk,Uid:c228fc2b-0000-4d3b-b679-8086e76c78a4,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:10.335475 kubelet[2855]: E0303 14:03:10.330175 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:10.348227 containerd[1576]: time="2026-03-03T14:03:10.348162018Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 3 14:03:10.349506 containerd[1576]: time="2026-03-03T14:03:10.349483700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vvdpm,Uid:39dff308-ff46-444b-bddd-0b45b42e0715,Namespace:kube-system,Attempt:0,}" Mar 3 14:03:10.360956 containerd[1576]: time="2026-03-03T14:03:10.357992363Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6677b978bd-gbpdl,Uid:c5130667-7949-42cd-8cf1-169b3aece1e9,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:10.411460 containerd[1576]: time="2026-03-03T14:03:10.410084662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc5886b4c-b9f66,Uid:28e246eb-3fd8-4f45-9264-380dc4fa62c5,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:10.458350 kubelet[2855]: E0303 14:03:10.457000 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:10.461460 containerd[1576]: time="2026-03-03T14:03:10.461419224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dpx6,Uid:a6da2aeb-b230-45f3-8292-e23a3c17d60c,Namespace:kube-system,Attempt:0,}" Mar 3 14:03:10.462479 containerd[1576]: time="2026-03-03T14:03:10.462442378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxxg6,Uid:2c967439-ed8f-40a6-ac45-3c8bab198902,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:10.547342 containerd[1576]: time="2026-03-03T14:03:10.547270187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df4cc67f5-nnxg6,Uid:6516ca51-7391-44dd-b25b-3ff46412e8d5,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:10.776002 containerd[1576]: time="2026-03-03T14:03:10.775095123Z" level=info msg="Container c13700329aa20c4cb81b9ac628ffa45774ced144376d022d2493499b770694ef: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:03:11.021030 containerd[1576]: time="2026-03-03T14:03:11.020217407Z" level=info msg="CreateContainer within sandbox \"a891386f6eb641952f3cca7771ce42daca5a285e5bc303c0c26f1ef353e7b0fc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c13700329aa20c4cb81b9ac628ffa45774ced144376d022d2493499b770694ef\"" Mar 3 14:03:11.025111 containerd[1576]: time="2026-03-03T14:03:11.025062986Z" level=info 
msg="StartContainer for \"c13700329aa20c4cb81b9ac628ffa45774ced144376d022d2493499b770694ef\"" Mar 3 14:03:11.043366 containerd[1576]: time="2026-03-03T14:03:11.043103758Z" level=info msg="connecting to shim c13700329aa20c4cb81b9ac628ffa45774ced144376d022d2493499b770694ef" address="unix:///run/containerd/s/dce41e445b3e86e2a88cd2e5bd7a971448780139cdda31b367c24a23d1193f0a" protocol=ttrpc version=3 Mar 3 14:03:11.248235 systemd[1]: Started cri-containerd-c13700329aa20c4cb81b9ac628ffa45774ced144376d022d2493499b770694ef.scope - libcontainer container c13700329aa20c4cb81b9ac628ffa45774ced144376d022d2493499b770694ef. Mar 3 14:03:11.407351 containerd[1576]: time="2026-03-03T14:03:11.406332989Z" level=error msg="Failed to destroy network for sandbox \"a9331a39f6e6ddff0040ccee4d35a011dbb51ad17bf7437c0d17d6ff54591b06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 14:03:11.458140 containerd[1576]: time="2026-03-03T14:03:11.458068744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk9vk,Uid:c228fc2b-0000-4d3b-b679-8086e76c78a4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9331a39f6e6ddff0040ccee4d35a011dbb51ad17bf7437c0d17d6ff54591b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 3 14:03:11.476379 systemd[1]: run-netns-cni\x2d0717425b\x2d4d76\x2d9aee\x2d2667\x2d3456ecd35f3d.mount: Deactivated successfully. 
Mar 3 14:03:11.566435 kubelet[2855]: E0303 14:03:11.566082 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9331a39f6e6ddff0040ccee4d35a011dbb51ad17bf7437c0d17d6ff54591b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:11.566435 kubelet[2855]: E0303 14:03:11.566349 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9331a39f6e6ddff0040ccee4d35a011dbb51ad17bf7437c0d17d6ff54591b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xk9vk"
Mar 3 14:03:11.566435 kubelet[2855]: E0303 14:03:11.566381 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9331a39f6e6ddff0040ccee4d35a011dbb51ad17bf7437c0d17d6ff54591b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xk9vk"
Mar 3 14:03:11.632461 kubelet[2855]: E0303 14:03:11.566451 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xk9vk_calico-system(c228fc2b-0000-4d3b-b679-8086e76c78a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xk9vk_calico-system(c228fc2b-0000-4d3b-b679-8086e76c78a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9331a39f6e6ddff0040ccee4d35a011dbb51ad17bf7437c0d17d6ff54591b06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xk9vk" podUID="c228fc2b-0000-4d3b-b679-8086e76c78a4"
Mar 3 14:03:11.703162 containerd[1576]: time="2026-03-03T14:03:11.703034327Z" level=error msg="Failed to destroy network for sandbox \"90fc0873b5143ee9f8c0b4942965ff897eed5ca2e30c27aa5884d8294325fd11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:11.713531 systemd[1]: run-netns-cni\x2d4e48e0cf\x2d755c\x2de0d4\x2d5191\x2dd19b36b96d07.mount: Deactivated successfully.
Mar 3 14:03:11.732380 containerd[1576]: time="2026-03-03T14:03:11.732315708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6677b978bd-vb4zq,Uid:61665e7b-9fb3-4659-b57b-d6e2b5ad54ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fc0873b5143ee9f8c0b4942965ff897eed5ca2e30c27aa5884d8294325fd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:11.739259 kubelet[2855]: E0303 14:03:11.737876 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fc0873b5143ee9f8c0b4942965ff897eed5ca2e30c27aa5884d8294325fd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:11.739259 kubelet[2855]: E0303 14:03:11.738212 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fc0873b5143ee9f8c0b4942965ff897eed5ca2e30c27aa5884d8294325fd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6677b978bd-vb4zq"
Mar 3 14:03:11.739259 kubelet[2855]: E0303 14:03:11.738243 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90fc0873b5143ee9f8c0b4942965ff897eed5ca2e30c27aa5884d8294325fd11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6677b978bd-vb4zq"
Mar 3 14:03:11.739447 kubelet[2855]: E0303 14:03:11.738310 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6677b978bd-vb4zq_calico-system(61665e7b-9fb3-4659-b57b-d6e2b5ad54ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6677b978bd-vb4zq_calico-system(61665e7b-9fb3-4659-b57b-d6e2b5ad54ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90fc0873b5143ee9f8c0b4942965ff897eed5ca2e30c27aa5884d8294325fd11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6677b978bd-vb4zq" podUID="61665e7b-9fb3-4659-b57b-d6e2b5ad54ac"
Mar 3 14:03:11.741238 containerd[1576]: time="2026-03-03T14:03:11.741197526Z" level=error msg="Failed to destroy network for sandbox \"2975d71075e31e9c4df89fef4cf5f6603719a0d36a2c07f6083e9c63b43ded40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:11.754155 systemd[1]: run-netns-cni\x2d085fde7b\x2d3b6e\x2d03b2\x2d6646\x2d10218369ab65.mount: Deactivated successfully.
Mar 3 14:03:11.809367 containerd[1576]: time="2026-03-03T14:03:11.809170594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vvdpm,Uid:39dff308-ff46-444b-bddd-0b45b42e0715,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2975d71075e31e9c4df89fef4cf5f6603719a0d36a2c07f6083e9c63b43ded40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:11.812044 kubelet[2855]: E0303 14:03:11.811187 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2975d71075e31e9c4df89fef4cf5f6603719a0d36a2c07f6083e9c63b43ded40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:11.812044 kubelet[2855]: E0303 14:03:11.811378 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2975d71075e31e9c4df89fef4cf5f6603719a0d36a2c07f6083e9c63b43ded40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-vvdpm"
Mar 3 14:03:11.812044 kubelet[2855]: E0303 14:03:11.811400 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2975d71075e31e9c4df89fef4cf5f6603719a0d36a2c07f6083e9c63b43ded40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-vvdpm"
Mar 3 14:03:11.812202 kubelet[2855]: E0303 14:03:11.811451 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-vvdpm_kube-system(39dff308-ff46-444b-bddd-0b45b42e0715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-vvdpm_kube-system(39dff308-ff46-444b-bddd-0b45b42e0715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2975d71075e31e9c4df89fef4cf5f6603719a0d36a2c07f6083e9c63b43ded40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-vvdpm" podUID="39dff308-ff46-444b-bddd-0b45b42e0715"
Mar 3 14:03:12.000557 containerd[1576]: time="2026-03-03T14:03:11.991451302Z" level=error msg="Failed to destroy network for sandbox \"56f5991869dd0328628e2609598e109274859da7d5fabb2e25fc2a0b664dc734\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.010054 systemd[1]: run-netns-cni\x2d23e6b23d\x2dbab5\x2d7c9e\x2d68db\x2dd1ddcfaf5801.mount: Deactivated successfully.
Mar 3 14:03:12.030119 containerd[1576]: time="2026-03-03T14:03:12.029380232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dpx6,Uid:a6da2aeb-b230-45f3-8292-e23a3c17d60c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f5991869dd0328628e2609598e109274859da7d5fabb2e25fc2a0b664dc734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.046244 kubelet[2855]: E0303 14:03:12.032826 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f5991869dd0328628e2609598e109274859da7d5fabb2e25fc2a0b664dc734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.046244 kubelet[2855]: E0303 14:03:12.032896 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f5991869dd0328628e2609598e109274859da7d5fabb2e25fc2a0b664dc734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8dpx6"
Mar 3 14:03:12.046244 kubelet[2855]: E0303 14:03:12.033065 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f5991869dd0328628e2609598e109274859da7d5fabb2e25fc2a0b664dc734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-8dpx6"
Mar 3 14:03:12.046438 kubelet[2855]: E0303 14:03:12.033129 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-8dpx6_kube-system(a6da2aeb-b230-45f3-8292-e23a3c17d60c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-8dpx6_kube-system(a6da2aeb-b230-45f3-8292-e23a3c17d60c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56f5991869dd0328628e2609598e109274859da7d5fabb2e25fc2a0b664dc734\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-8dpx6" podUID="a6da2aeb-b230-45f3-8292-e23a3c17d60c"
Mar 3 14:03:12.075296 containerd[1576]: time="2026-03-03T14:03:12.071370900Z" level=error msg="Failed to destroy network for sandbox \"44505eeb85f9d49539bd4f78e7bfb2867d537a5be87403a52e198fab7a787852\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.086493 systemd[1]: run-netns-cni\x2dbd974ff9\x2d28bb\x2d40f3\x2d13a9\x2dc32135c9beed.mount: Deactivated successfully.
Mar 3 14:03:12.103350 containerd[1576]: time="2026-03-03T14:03:12.103278003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6677b978bd-gbpdl,Uid:c5130667-7949-42cd-8cf1-169b3aece1e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44505eeb85f9d49539bd4f78e7bfb2867d537a5be87403a52e198fab7a787852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.121177 containerd[1576]: time="2026-03-03T14:03:12.119342851Z" level=error msg="Failed to destroy network for sandbox \"fe71e0e03a95c8b596a655a1f54661372c40ea96bc64f1dc6ff565e7df76e17d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.131350 containerd[1576]: time="2026-03-03T14:03:12.131171400Z" level=error msg="Failed to destroy network for sandbox \"45bde1f95331c9078f696aa41bd053072ebdc90d957ae615d948f6efe2ccf139\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.132536 containerd[1576]: time="2026-03-03T14:03:12.131290250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxxg6,Uid:2c967439-ed8f-40a6-ac45-3c8bab198902,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe71e0e03a95c8b596a655a1f54661372c40ea96bc64f1dc6ff565e7df76e17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.139313 containerd[1576]: time="2026-03-03T14:03:12.137374880Z" level=error msg="Failed to destroy network for sandbox \"502cac8398c78c669a95df87280c31de8c031a5f23d778ae3c25a514341bc835\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.139419 kubelet[2855]: E0303 14:03:12.138538 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44505eeb85f9d49539bd4f78e7bfb2867d537a5be87403a52e198fab7a787852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.139419 kubelet[2855]: E0303 14:03:12.138858 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44505eeb85f9d49539bd4f78e7bfb2867d537a5be87403a52e198fab7a787852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6677b978bd-gbpdl"
Mar 3 14:03:12.139419 kubelet[2855]: E0303 14:03:12.138887 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44505eeb85f9d49539bd4f78e7bfb2867d537a5be87403a52e198fab7a787852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6677b978bd-gbpdl"
Mar 3 14:03:12.139837 kubelet[2855]: E0303 14:03:12.139086 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6677b978bd-gbpdl_calico-system(c5130667-7949-42cd-8cf1-169b3aece1e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6677b978bd-gbpdl_calico-system(c5130667-7949-42cd-8cf1-169b3aece1e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44505eeb85f9d49539bd4f78e7bfb2867d537a5be87403a52e198fab7a787852\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6677b978bd-gbpdl" podUID="c5130667-7949-42cd-8cf1-169b3aece1e9"
Mar 3 14:03:12.143070 kubelet[2855]: E0303 14:03:12.141131 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe71e0e03a95c8b596a655a1f54661372c40ea96bc64f1dc6ff565e7df76e17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.143070 kubelet[2855]: E0303 14:03:12.141304 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe71e0e03a95c8b596a655a1f54661372c40ea96bc64f1dc6ff565e7df76e17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wxxg6"
Mar 3 14:03:12.143070 kubelet[2855]: E0303 14:03:12.141328 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe71e0e03a95c8b596a655a1f54661372c40ea96bc64f1dc6ff565e7df76e17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wxxg6"
Mar 3 14:03:12.143454 kubelet[2855]: E0303 14:03:12.141381 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-wxxg6_calico-system(2c967439-ed8f-40a6-ac45-3c8bab198902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-wxxg6_calico-system(2c967439-ed8f-40a6-ac45-3c8bab198902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe71e0e03a95c8b596a655a1f54661372c40ea96bc64f1dc6ff565e7df76e17d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-wxxg6" podUID="2c967439-ed8f-40a6-ac45-3c8bab198902"
Mar 3 14:03:12.151285 containerd[1576]: time="2026-03-03T14:03:12.151229811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc5886b4c-b9f66,Uid:28e246eb-3fd8-4f45-9264-380dc4fa62c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bde1f95331c9078f696aa41bd053072ebdc90d957ae615d948f6efe2ccf139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.155196 kubelet[2855]: E0303 14:03:12.154889 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bde1f95331c9078f696aa41bd053072ebdc90d957ae615d948f6efe2ccf139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.155196 kubelet[2855]: E0303 14:03:12.155104 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bde1f95331c9078f696aa41bd053072ebdc90d957ae615d948f6efe2ccf139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc5886b4c-b9f66"
Mar 3 14:03:12.155196 kubelet[2855]: E0303 14:03:12.155129 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bde1f95331c9078f696aa41bd053072ebdc90d957ae615d948f6efe2ccf139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bc5886b4c-b9f66"
Mar 3 14:03:12.155487 kubelet[2855]: E0303 14:03:12.155457 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bc5886b4c-b9f66_calico-system(28e246eb-3fd8-4f45-9264-380dc4fa62c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bc5886b4c-b9f66_calico-system(28e246eb-3fd8-4f45-9264-380dc4fa62c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45bde1f95331c9078f696aa41bd053072ebdc90d957ae615d948f6efe2ccf139\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bc5886b4c-b9f66" podUID="28e246eb-3fd8-4f45-9264-380dc4fa62c5"
Mar 3 14:03:12.157333 containerd[1576]: time="2026-03-03T14:03:12.157295214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df4cc67f5-nnxg6,Uid:6516ca51-7391-44dd-b25b-3ff46412e8d5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"502cac8398c78c669a95df87280c31de8c031a5f23d778ae3c25a514341bc835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.159303 kubelet[2855]: E0303 14:03:12.158430 2855 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502cac8398c78c669a95df87280c31de8c031a5f23d778ae3c25a514341bc835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 14:03:12.163343 kubelet[2855]: E0303 14:03:12.162379 2855 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502cac8398c78c669a95df87280c31de8c031a5f23d778ae3c25a514341bc835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6df4cc67f5-nnxg6"
Mar 3 14:03:12.163343 kubelet[2855]: E0303 14:03:12.162829 2855 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502cac8398c78c669a95df87280c31de8c031a5f23d778ae3c25a514341bc835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6df4cc67f5-nnxg6"
Mar 3 14:03:12.163343 kubelet[2855]: E0303 14:03:12.162911 2855 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6df4cc67f5-nnxg6_calico-system(6516ca51-7391-44dd-b25b-3ff46412e8d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6df4cc67f5-nnxg6_calico-system(6516ca51-7391-44dd-b25b-3ff46412e8d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"502cac8398c78c669a95df87280c31de8c031a5f23d778ae3c25a514341bc835\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6df4cc67f5-nnxg6" podUID="6516ca51-7391-44dd-b25b-3ff46412e8d5"
Mar 3 14:03:12.268330 containerd[1576]: time="2026-03-03T14:03:12.268113546Z" level=info msg="StartContainer for \"c13700329aa20c4cb81b9ac628ffa45774ced144376d022d2493499b770694ef\" returns successfully"
Mar 3 14:03:12.474511 kubelet[2855]: I0303 14:03:12.474266 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-7n5cl" podStartSLOduration=3.964188699 podStartE2EDuration="1m0.474245287s" podCreationTimestamp="2026-03-03 14:02:12 +0000 UTC" firstStartedPulling="2026-03-03 14:02:13.719082334 +0000 UTC m=+40.599087997" lastFinishedPulling="2026-03-03 14:03:10.229138921 +0000 UTC m=+97.109144585" observedRunningTime="2026-03-03 14:03:12.472410236 +0000 UTC m=+99.352415900" watchObservedRunningTime="2026-03-03 14:03:12.474245287 +0000 UTC m=+99.354250970"
Mar 3 14:03:12.476096 systemd[1]: run-netns-cni\x2d28fb5a8e\x2dd0c0\x2d82e5\x2da64a\x2d37bdc1657b4c.mount: Deactivated successfully.
Mar 3 14:03:12.476274 systemd[1]: run-netns-cni\x2d4bd69492\x2d5073\x2de230\x2da1b4\x2d688def843b5b.mount: Deactivated successfully.
Mar 3 14:03:12.476376 systemd[1]: run-netns-cni\x2d2c66e75a\x2d785f\x2db0f9\x2ddd25\x2d4361a76ea8f5.mount: Deactivated successfully.
Mar 3 14:03:14.570811 kubelet[2855]: I0303 14:03:14.570488 2855 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-ca-bundle\") pod \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " Mar 3 14:03:14.574874 kubelet[2855]: I0303 14:03:14.574822 2855 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-backend-key-pair\") pod \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " Mar 3 14:03:14.576810 kubelet[2855]: I0303 14:03:14.576781 2855 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-nginx-config\" (UniqueName: \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-nginx-config\") pod \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " Mar 3 14:03:14.577057 kubelet[2855]: I0303 14:03:14.577035 2855 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/28e246eb-3fd8-4f45-9264-380dc4fa62c5-kube-api-access-4nmhw\" (UniqueName: \"kubernetes.io/projected/28e246eb-3fd8-4f45-9264-380dc4fa62c5-kube-api-access-4nmhw\") pod \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\" (UID: \"28e246eb-3fd8-4f45-9264-380dc4fa62c5\") " Mar 3 14:03:14.589057 kubelet[2855]: I0303 14:03:14.587959 2855 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-ca-bundle" pod "28e246eb-3fd8-4f45-9264-380dc4fa62c5" (UID: "28e246eb-3fd8-4f45-9264-380dc4fa62c5"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 14:03:14.590958 kubelet[2855]: I0303 14:03:14.590919 2855 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-nginx-config" pod "28e246eb-3fd8-4f45-9264-380dc4fa62c5" (UID: "28e246eb-3fd8-4f45-9264-380dc4fa62c5"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 14:03:14.635303 systemd[1]: var-lib-kubelet-pods-28e246eb\x2d3fd8\x2d4f45\x2d9264\x2d380dc4fa62c5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 3 14:03:14.648434 systemd[1]: var-lib-kubelet-pods-28e246eb\x2d3fd8\x2d4f45\x2d9264\x2d380dc4fa62c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nmhw.mount: Deactivated successfully. Mar 3 14:03:14.659072 kubelet[2855]: I0303 14:03:14.652466 2855 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e246eb-3fd8-4f45-9264-380dc4fa62c5-kube-api-access-4nmhw" pod "28e246eb-3fd8-4f45-9264-380dc4fa62c5" (UID: "28e246eb-3fd8-4f45-9264-380dc4fa62c5"). InnerVolumeSpecName "kube-api-access-4nmhw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 14:03:14.663895 kubelet[2855]: I0303 14:03:14.659550 2855 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-backend-key-pair" pod "28e246eb-3fd8-4f45-9264-380dc4fa62c5" (UID: "28e246eb-3fd8-4f45-9264-380dc4fa62c5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 3 14:03:14.677989 kubelet[2855]: I0303 14:03:14.677911 2855 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 3 14:03:14.677989 kubelet[2855]: I0303 14:03:14.677976 2855 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28e246eb-3fd8-4f45-9264-380dc4fa62c5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 3 14:03:14.677989 kubelet[2855]: I0303 14:03:14.677992 2855 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/28e246eb-3fd8-4f45-9264-380dc4fa62c5-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 3 14:03:14.677989 kubelet[2855]: I0303 14:03:14.678004 2855 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4nmhw\" (UniqueName: \"kubernetes.io/projected/28e246eb-3fd8-4f45-9264-380dc4fa62c5-kube-api-access-4nmhw\") on node \"localhost\" DevicePath \"\"" Mar 3 14:03:14.804438 systemd[1]: Removed slice kubepods-besteffort-pod28e246eb_3fd8_4f45_9264_380dc4fa62c5.slice - libcontainer container kubepods-besteffort-pod28e246eb_3fd8_4f45_9264_380dc4fa62c5.slice. Mar 3 14:03:15.145949 systemd[1]: Created slice kubepods-besteffort-pod4040d18c_265e_4c10_b9b5_120a14100c38.slice - libcontainer container kubepods-besteffort-pod4040d18c_265e_4c10_b9b5_120a14100c38.slice. 
Mar 3 14:03:15.187863 kubelet[2855]: I0303 14:03:15.187115 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4040d18c-265e-4c10-b9b5-120a14100c38-whisker-backend-key-pair\") pod \"whisker-7dd7567d64-pnd7p\" (UID: \"4040d18c-265e-4c10-b9b5-120a14100c38\") " pod="calico-system/whisker-7dd7567d64-pnd7p" Mar 3 14:03:15.187863 kubelet[2855]: I0303 14:03:15.187313 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4040d18c-265e-4c10-b9b5-120a14100c38-whisker-ca-bundle\") pod \"whisker-7dd7567d64-pnd7p\" (UID: \"4040d18c-265e-4c10-b9b5-120a14100c38\") " pod="calico-system/whisker-7dd7567d64-pnd7p" Mar 3 14:03:15.187863 kubelet[2855]: I0303 14:03:15.187341 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4040d18c-265e-4c10-b9b5-120a14100c38-nginx-config\") pod \"whisker-7dd7567d64-pnd7p\" (UID: \"4040d18c-265e-4c10-b9b5-120a14100c38\") " pod="calico-system/whisker-7dd7567d64-pnd7p" Mar 3 14:03:15.187863 kubelet[2855]: I0303 14:03:15.187374 2855 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdvhh\" (UniqueName: \"kubernetes.io/projected/4040d18c-265e-4c10-b9b5-120a14100c38-kube-api-access-vdvhh\") pod \"whisker-7dd7567d64-pnd7p\" (UID: \"4040d18c-265e-4c10-b9b5-120a14100c38\") " pod="calico-system/whisker-7dd7567d64-pnd7p" Mar 3 14:03:15.486551 containerd[1576]: time="2026-03-03T14:03:15.483356144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd7567d64-pnd7p,Uid:4040d18c-265e-4c10-b9b5-120a14100c38,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:16.213868 kubelet[2855]: I0303 14:03:16.213203 2855 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" 
podUID="28e246eb-3fd8-4f45-9264-380dc4fa62c5" path="/var/lib/kubelet/pods/28e246eb-3fd8-4f45-9264-380dc4fa62c5/volumes" Mar 3 14:03:16.505185 systemd-networkd[1477]: cali4a9adcb54c0: Link UP Mar 3 14:03:16.511214 systemd-networkd[1477]: cali4a9adcb54c0: Gained carrier Mar 3 14:03:16.593553 containerd[1576]: 2026-03-03 14:03:15.688 [ERROR][4005] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 3 14:03:16.593553 containerd[1576]: 2026-03-03 14:03:15.882 [INFO][4005] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7dd7567d64--pnd7p-eth0 whisker-7dd7567d64- calico-system 4040d18c-265e-4c10-b9b5-120a14100c38 1117 0 2026-03-03 14:03:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dd7567d64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7dd7567d64-pnd7p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4a9adcb54c0 [] [] }} ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-" Mar 3 14:03:16.593553 containerd[1576]: 2026-03-03 14:03:15.882 [INFO][4005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" Mar 3 14:03:16.593553 containerd[1576]: 2026-03-03 14:03:16.201 [INFO][4023] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" 
HandleID="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Workload="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.238 [INFO][4023] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" HandleID="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Workload="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000323860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7dd7567d64-pnd7p", "timestamp":"2026-03-03 14:03:16.201151892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000442420)} Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.238 [INFO][4023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.238 [INFO][4023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.238 [INFO][4023] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.251 [INFO][4023] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" host="localhost" Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.277 [INFO][4023] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.302 [INFO][4023] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.316 [INFO][4023] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.332 [INFO][4023] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:16.595109 containerd[1576]: 2026-03-03 14:03:16.333 [INFO][4023] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" host="localhost" Mar 3 14:03:16.595967 containerd[1576]: 2026-03-03 14:03:16.339 [INFO][4023] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c Mar 3 14:03:16.595967 containerd[1576]: 2026-03-03 14:03:16.359 [INFO][4023] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" host="localhost" Mar 3 14:03:16.595967 containerd[1576]: 2026-03-03 14:03:16.380 [INFO][4023] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" host="localhost" Mar 3 14:03:16.595967 containerd[1576]: 2026-03-03 14:03:16.381 [INFO][4023] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" host="localhost" Mar 3 14:03:16.595967 containerd[1576]: 2026-03-03 14:03:16.381 [INFO][4023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:16.595967 containerd[1576]: 2026-03-03 14:03:16.381 [INFO][4023] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" HandleID="k8s-pod-network.fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Workload="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" Mar 3 14:03:16.596086 containerd[1576]: 2026-03-03 14:03:16.400 [INFO][4005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dd7567d64--pnd7p-eth0", GenerateName:"whisker-7dd7567d64-", Namespace:"calico-system", SelfLink:"", UID:"4040d18c-265e-4c10-b9b5-120a14100c38", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 3, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dd7567d64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7dd7567d64-pnd7p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a9adcb54c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:16.596086 containerd[1576]: 2026-03-03 14:03:16.400 [INFO][4005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" Mar 3 14:03:16.596973 containerd[1576]: 2026-03-03 14:03:16.400 [INFO][4005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a9adcb54c0 ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" Mar 3 14:03:16.596973 containerd[1576]: 2026-03-03 14:03:16.537 [INFO][4005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" Mar 3 14:03:16.597030 containerd[1576]: 2026-03-03 14:03:16.540 [INFO][4005] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" 
WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dd7567d64--pnd7p-eth0", GenerateName:"whisker-7dd7567d64-", Namespace:"calico-system", SelfLink:"", UID:"4040d18c-265e-4c10-b9b5-120a14100c38", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 3, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dd7567d64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c", Pod:"whisker-7dd7567d64-pnd7p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a9adcb54c0", MAC:"66:73:f8:10:2e:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:16.597431 containerd[1576]: 2026-03-03 14:03:16.581 [INFO][4005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" Namespace="calico-system" Pod="whisker-7dd7567d64-pnd7p" WorkloadEndpoint="localhost-k8s-whisker--7dd7567d64--pnd7p-eth0" Mar 3 14:03:16.795121 containerd[1576]: time="2026-03-03T14:03:16.793458073Z" level=info msg="connecting to shim 
fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c" address="unix:///run/containerd/s/6d98233bb67de240f907f5b5c62d79413eb6b017b69ed2db79723ff9a4aec023" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:17.005935 systemd[1]: Started cri-containerd-fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c.scope - libcontainer container fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c. Mar 3 14:03:17.093067 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:17.379215 containerd[1576]: time="2026-03-03T14:03:17.379095916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd7567d64-pnd7p,Uid:4040d18c-265e-4c10-b9b5-120a14100c38,Namespace:calico-system,Attempt:0,} returns sandbox id \"fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c\"" Mar 3 14:03:17.439844 containerd[1576]: time="2026-03-03T14:03:17.438867894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 3 14:03:17.660839 systemd-networkd[1477]: cali4a9adcb54c0: Gained IPv6LL Mar 3 14:03:19.836893 containerd[1576]: time="2026-03-03T14:03:19.835472523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:19.840181 containerd[1576]: time="2026-03-03T14:03:19.840135524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 3 14:03:19.856231 containerd[1576]: time="2026-03-03T14:03:19.849981418Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:19.869371 containerd[1576]: time="2026-03-03T14:03:19.868205713Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:19.872157 containerd[1576]: time="2026-03-03T14:03:19.872112447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.433060829s" Mar 3 14:03:19.873023 containerd[1576]: time="2026-03-03T14:03:19.872295322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 3 14:03:19.967170 containerd[1576]: time="2026-03-03T14:03:19.967112464Z" level=info msg="CreateContainer within sandbox \"fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 3 14:03:20.089328 containerd[1576]: time="2026-03-03T14:03:20.089212219Z" level=info msg="Container 87a81ed68c689292d8b2498c96ea4e659dfb73e741630da9d914da27aa3e31b9: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:03:20.191169 containerd[1576]: time="2026-03-03T14:03:20.191063665Z" level=info msg="CreateContainer within sandbox \"fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"87a81ed68c689292d8b2498c96ea4e659dfb73e741630da9d914da27aa3e31b9\"" Mar 3 14:03:20.202550 containerd[1576]: time="2026-03-03T14:03:20.202496452Z" level=info msg="StartContainer for \"87a81ed68c689292d8b2498c96ea4e659dfb73e741630da9d914da27aa3e31b9\"" Mar 3 14:03:20.219249 containerd[1576]: time="2026-03-03T14:03:20.219202303Z" level=info msg="connecting to shim 
87a81ed68c689292d8b2498c96ea4e659dfb73e741630da9d914da27aa3e31b9" address="unix:///run/containerd/s/6d98233bb67de240f907f5b5c62d79413eb6b017b69ed2db79723ff9a4aec023" protocol=ttrpc version=3 Mar 3 14:03:20.513299 systemd[1]: Started cri-containerd-87a81ed68c689292d8b2498c96ea4e659dfb73e741630da9d914da27aa3e31b9.scope - libcontainer container 87a81ed68c689292d8b2498c96ea4e659dfb73e741630da9d914da27aa3e31b9. Mar 3 14:03:20.916938 containerd[1576]: time="2026-03-03T14:03:20.916160290Z" level=info msg="StartContainer for \"87a81ed68c689292d8b2498c96ea4e659dfb73e741630da9d914da27aa3e31b9\" returns successfully" Mar 3 14:03:20.938107 containerd[1576]: time="2026-03-03T14:03:20.937912194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 3 14:03:22.043852 systemd-networkd[1477]: vxlan.calico: Link UP Mar 3 14:03:22.044381 systemd-networkd[1477]: vxlan.calico: Gained carrier Mar 3 14:03:22.224566 containerd[1576]: time="2026-03-03T14:03:22.221217643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk9vk,Uid:c228fc2b-0000-4d3b-b679-8086e76c78a4,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:23.213936 containerd[1576]: time="2026-03-03T14:03:23.213745040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6677b978bd-gbpdl,Uid:c5130667-7949-42cd-8cf1-169b3aece1e9,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:23.635992 systemd-networkd[1477]: calicf4549aae9b: Link UP Mar 3 14:03:23.666213 systemd-networkd[1477]: calicf4549aae9b: Gained carrier Mar 3 14:03:23.674263 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Mar 3 14:03:23.833447 containerd[1576]: 2026-03-03 14:03:22.811 [INFO][4307] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xk9vk-eth0 csi-node-driver- calico-system c228fc2b-0000-4d3b-b679-8086e76c78a4 819 0 2026-03-03 14:02:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xk9vk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicf4549aae9b [] [] }} ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-" Mar 3 14:03:23.833447 containerd[1576]: 2026-03-03 14:03:22.816 [INFO][4307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-eth0" Mar 3 14:03:23.833447 containerd[1576]: 2026-03-03 14:03:23.082 [INFO][4340] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" HandleID="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Workload="localhost-k8s-csi--node--driver--xk9vk-eth0" Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.139 [INFO][4340] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" HandleID="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Workload="localhost-k8s-csi--node--driver--xk9vk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039e8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xk9vk", "timestamp":"2026-03-03 14:03:23.082384971 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002291e0)} Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.146 [INFO][4340] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.148 [INFO][4340] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.148 [INFO][4340] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.191 [INFO][4340] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" host="localhost" Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.285 [INFO][4340] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.325 [INFO][4340] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.369 [INFO][4340] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.401 [INFO][4340] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:23.834471 containerd[1576]: 2026-03-03 14:03:23.405 [INFO][4340] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" host="localhost" Mar 3 14:03:23.839849 containerd[1576]: 2026-03-03 14:03:23.438 [INFO][4340] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5 Mar 3 14:03:23.839849 containerd[1576]: 2026-03-03 
14:03:23.476 [INFO][4340] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" host="localhost" Mar 3 14:03:23.839849 containerd[1576]: 2026-03-03 14:03:23.583 [INFO][4340] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" host="localhost" Mar 3 14:03:23.839849 containerd[1576]: 2026-03-03 14:03:23.588 [INFO][4340] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" host="localhost" Mar 3 14:03:23.839849 containerd[1576]: 2026-03-03 14:03:23.589 [INFO][4340] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:23.839849 containerd[1576]: 2026-03-03 14:03:23.591 [INFO][4340] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" HandleID="k8s-pod-network.8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Workload="localhost-k8s-csi--node--driver--xk9vk-eth0" Mar 3 14:03:23.840110 containerd[1576]: 2026-03-03 14:03:23.619 [INFO][4307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xk9vk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c228fc2b-0000-4d3b-b679-8086e76c78a4", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 12, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xk9vk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf4549aae9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:23.840376 containerd[1576]: 2026-03-03 14:03:23.624 [INFO][4307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-eth0" Mar 3 14:03:23.840376 containerd[1576]: 2026-03-03 14:03:23.624 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf4549aae9b ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-eth0" Mar 3 14:03:23.840376 containerd[1576]: 2026-03-03 14:03:23.727 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" 
Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-eth0" Mar 3 14:03:23.840442 containerd[1576]: 2026-03-03 14:03:23.739 [INFO][4307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xk9vk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c228fc2b-0000-4d3b-b679-8086e76c78a4", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5", Pod:"csi-node-driver-xk9vk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf4549aae9b", MAC:"52:50:21:c6:bd:e7", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:23.850847 containerd[1576]: 2026-03-03 14:03:23.804 [INFO][4307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" Namespace="calico-system" Pod="csi-node-driver-xk9vk" WorkloadEndpoint="localhost-k8s-csi--node--driver--xk9vk-eth0" Mar 3 14:03:23.999971 containerd[1576]: time="2026-03-03T14:03:23.999080590Z" level=info msg="connecting to shim 8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5" address="unix:///run/containerd/s/12d79802b83b36f25f3eaf6690da5a5c246db5c69fffc403bbc145fdd922b0a4" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:24.265305 containerd[1576]: time="2026-03-03T14:03:24.261409392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df4cc67f5-nnxg6,Uid:6516ca51-7391-44dd-b25b-3ff46412e8d5,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:24.268936 kubelet[2855]: E0303 14:03:24.268900 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:24.279041 containerd[1576]: time="2026-03-03T14:03:24.278990020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vvdpm,Uid:39dff308-ff46-444b-bddd-0b45b42e0715,Namespace:kube-system,Attempt:0,}" Mar 3 14:03:24.283297 systemd[1]: Started cri-containerd-8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5.scope - libcontainer container 8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5. 
Mar 3 14:03:24.644045 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:24.736489 systemd-networkd[1477]: cali2f9c18e27bf: Link UP Mar 3 14:03:24.742896 systemd-networkd[1477]: cali2f9c18e27bf: Gained carrier Mar 3 14:03:24.910261 containerd[1576]: 2026-03-03 14:03:23.785 [INFO][4349] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0 calico-apiserver-6677b978bd- calico-system c5130667-7949-42cd-8cf1-169b3aece1e9 1050 0 2026-03-03 14:02:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6677b978bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6677b978bd-gbpdl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2f9c18e27bf [] [] }} ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-" Mar 3 14:03:24.910261 containerd[1576]: 2026-03-03 14:03:23.790 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" Mar 3 14:03:24.910261 containerd[1576]: 2026-03-03 14:03:24.075 [INFO][4371] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" HandleID="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" 
Workload="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.120 [INFO][4371] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" HandleID="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Workload="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000425ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6677b978bd-gbpdl", "timestamp":"2026-03-03 14:03:24.075026892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004269a0)} Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.121 [INFO][4371] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.121 [INFO][4371] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.122 [INFO][4371] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.140 [INFO][4371] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" host="localhost" Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.197 [INFO][4371] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.259 [INFO][4371] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.346 [INFO][4371] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.393 [INFO][4371] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:24.911539 containerd[1576]: 2026-03-03 14:03:24.393 [INFO][4371] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" host="localhost" Mar 3 14:03:24.916850 containerd[1576]: 2026-03-03 14:03:24.428 [INFO][4371] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678 Mar 3 14:03:24.916850 containerd[1576]: 2026-03-03 14:03:24.484 [INFO][4371] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" host="localhost" Mar 3 14:03:24.916850 containerd[1576]: 2026-03-03 14:03:24.545 [INFO][4371] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" host="localhost" Mar 3 14:03:24.916850 containerd[1576]: 2026-03-03 14:03:24.546 [INFO][4371] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" host="localhost" Mar 3 14:03:24.916850 containerd[1576]: 2026-03-03 14:03:24.546 [INFO][4371] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:24.916850 containerd[1576]: 2026-03-03 14:03:24.546 [INFO][4371] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" HandleID="k8s-pod-network.e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Workload="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" Mar 3 14:03:24.917031 containerd[1576]: 2026-03-03 14:03:24.611 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0", GenerateName:"calico-apiserver-6677b978bd-", Namespace:"calico-system", SelfLink:"", UID:"c5130667-7949-42cd-8cf1-169b3aece1e9", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6677b978bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6677b978bd-gbpdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2f9c18e27bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:24.917443 containerd[1576]: 2026-03-03 14:03:24.632 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" Mar 3 14:03:24.917443 containerd[1576]: 2026-03-03 14:03:24.632 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f9c18e27bf ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" Mar 3 14:03:24.917443 containerd[1576]: 2026-03-03 14:03:24.752 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" Mar 3 14:03:24.917863 containerd[1576]: 2026-03-03 14:03:24.756 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0", GenerateName:"calico-apiserver-6677b978bd-", Namespace:"calico-system", SelfLink:"", UID:"c5130667-7949-42cd-8cf1-169b3aece1e9", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6677b978bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678", Pod:"calico-apiserver-6677b978bd-gbpdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2f9c18e27bf", MAC:"2a:00:9b:55:66:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:24.918150 containerd[1576]: 2026-03-03 14:03:24.809 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" 
Namespace="calico-system" Pod="calico-apiserver-6677b978bd-gbpdl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--gbpdl-eth0" Mar 3 14:03:24.957163 systemd-networkd[1477]: calicf4549aae9b: Gained IPv6LL Mar 3 14:03:25.201522 containerd[1576]: time="2026-03-03T14:03:25.198426264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6677b978bd-vb4zq,Uid:61665e7b-9fb3-4659-b57b-d6e2b5ad54ac,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:25.244083 containerd[1576]: time="2026-03-03T14:03:25.242323022Z" level=info msg="connecting to shim e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678" address="unix:///run/containerd/s/f38e8ac46e7bdaa24ef382cd64e42bba692cd50b89a44aaf5555a684bf50cd13" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:25.277351 containerd[1576]: time="2026-03-03T14:03:25.276963367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xk9vk,Uid:c228fc2b-0000-4d3b-b679-8086e76c78a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5\"" Mar 3 14:03:25.702026 systemd[1]: Started cri-containerd-e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678.scope - libcontainer container e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678. 
Mar 3 14:03:25.913510 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:25.925986 systemd-networkd[1477]: caliedf6700cdd1: Link UP Mar 3 14:03:25.955302 systemd-networkd[1477]: caliedf6700cdd1: Gained carrier Mar 3 14:03:26.043900 systemd-networkd[1477]: cali2f9c18e27bf: Gained IPv6LL Mar 3 14:03:26.070911 containerd[1576]: 2026-03-03 14:03:25.070 [INFO][4424] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0 calico-kube-controllers-6df4cc67f5- calico-system 6516ca51-7391-44dd-b25b-3ff46412e8d5 1058 0 2026-03-03 14:02:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6df4cc67f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6df4cc67f5-nnxg6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliedf6700cdd1 [] [] }} ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-" Mar 3 14:03:26.070911 containerd[1576]: 2026-03-03 14:03:25.084 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" Mar 3 14:03:26.070911 containerd[1576]: 2026-03-03 14:03:25.478 [INFO][4498] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" HandleID="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Workload="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.544 [INFO][4498] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" HandleID="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Workload="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048d3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6df4cc67f5-nnxg6", "timestamp":"2026-03-03 14:03:25.478009629 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001422c0)} Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.550 [INFO][4498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.550 [INFO][4498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.550 [INFO][4498] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.591 [INFO][4498] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" host="localhost" Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.613 [INFO][4498] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.659 [INFO][4498] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.669 [INFO][4498] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:26.073180 containerd[1576]: 2026-03-03 14:03:25.693 [INFO][4498] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:26.073867 containerd[1576]: 2026-03-03 14:03:25.693 [INFO][4498] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" host="localhost" Mar 3 14:03:26.073867 containerd[1576]: 2026-03-03 14:03:25.708 [INFO][4498] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac Mar 3 14:03:26.073867 containerd[1576]: 2026-03-03 14:03:25.748 [INFO][4498] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" host="localhost" Mar 3 14:03:26.073867 containerd[1576]: 2026-03-03 14:03:25.782 [INFO][4498] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" host="localhost" Mar 3 14:03:26.073867 containerd[1576]: 2026-03-03 14:03:25.782 [INFO][4498] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" host="localhost" Mar 3 14:03:26.073867 containerd[1576]: 2026-03-03 14:03:25.783 [INFO][4498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:26.073867 containerd[1576]: 2026-03-03 14:03:25.783 [INFO][4498] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" HandleID="k8s-pod-network.d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Workload="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" Mar 3 14:03:26.074148 containerd[1576]: 2026-03-03 14:03:25.852 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0", GenerateName:"calico-kube-controllers-6df4cc67f5-", Namespace:"calico-system", SelfLink:"", UID:"6516ca51-7391-44dd-b25b-3ff46412e8d5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6df4cc67f5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6df4cc67f5-nnxg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedf6700cdd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:26.074410 containerd[1576]: 2026-03-03 14:03:25.852 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" Mar 3 14:03:26.074410 containerd[1576]: 2026-03-03 14:03:25.852 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedf6700cdd1 ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" Mar 3 14:03:26.074410 containerd[1576]: 2026-03-03 14:03:25.970 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" Mar 3 14:03:26.078874 containerd[1576]: 2026-03-03 
14:03:25.974 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0", GenerateName:"calico-kube-controllers-6df4cc67f5-", Namespace:"calico-system", SelfLink:"", UID:"6516ca51-7391-44dd-b25b-3ff46412e8d5", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6df4cc67f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac", Pod:"calico-kube-controllers-6df4cc67f5-nnxg6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliedf6700cdd1", MAC:"5a:7c:06:43:b2:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:26.079339 containerd[1576]: 2026-03-03 
14:03:26.013 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" Namespace="calico-system" Pod="calico-kube-controllers-6df4cc67f5-nnxg6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6df4cc67f5--nnxg6-eth0" Mar 3 14:03:26.203095 containerd[1576]: time="2026-03-03T14:03:26.202384893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxxg6,Uid:2c967439-ed8f-40a6-ac45-3c8bab198902,Namespace:calico-system,Attempt:0,}" Mar 3 14:03:26.204856 systemd-networkd[1477]: cali22a70c96c29: Link UP Mar 3 14:03:26.213848 systemd-networkd[1477]: cali22a70c96c29: Gained carrier Mar 3 14:03:26.342445 containerd[1576]: 2026-03-03 14:03:25.221 [INFO][4432] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--vvdpm-eth0 coredns-7d764666f9- kube-system 39dff308-ff46-444b-bddd-0b45b42e0715 1061 0 2026-03-03 14:01:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-vvdpm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22a70c96c29 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-" Mar 3 14:03:26.342445 containerd[1576]: 2026-03-03 14:03:25.225 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" 
Mar 3 14:03:26.342445 containerd[1576]: 2026-03-03 14:03:25.521 [INFO][4515] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" HandleID="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Workload="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.645 [INFO][4515] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" HandleID="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Workload="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-vvdpm", "timestamp":"2026-03-03 14:03:25.52154495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007b4000)} Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.646 [INFO][4515] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.783 [INFO][4515] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
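The IPAM sequence above (request count → auto-assign → "About to acquire" → "Acquired host-wide IPAM lock") repeats for every pod in this log. A hypothetical helper to pull that lock timeline out of such journal lines — the regex is an assumption derived from the message text shown above, not part of Calico itself:

```python
import re

# Calico embeds its own log inside a containerd journal line; this pattern
# targets the lock-phase messages from ipam_plugin.go seen in the log above.
LOCK_RE = re.compile(
    r"ipam/ipam_plugin\.go \d+: "
    r"(About to acquire|Acquired|Released) host-wide IPAM lock"
)

def lock_events(lines):
    """Yield the lock-phase verb for each journal line that matches."""
    for line in lines:
        m = LOCK_RE.search(line)
        if m:
            yield m.group(1)

sample = [
    'containerd[1576]: 2026-03-03 14:03:25.646 [INFO][4515] '
    'ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.',
    'containerd[1576]: 2026-03-03 14:03:25.783 [INFO][4515] '
    'ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.',
]
print(list(lock_events(sample)))  # ['About to acquire', 'Acquired']
```

Because the lock is host-wide, the timestamps between "About to acquire" and "Acquired" give a rough measure of IPAM contention when several sandboxes start at once, as they do here.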
Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.783 [INFO][4515] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.805 [INFO][4515] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" host="localhost" Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.855 [INFO][4515] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.898 [INFO][4515] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.920 [INFO][4515] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.992 [INFO][4515] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:26.343317 containerd[1576]: 2026-03-03 14:03:25.992 [INFO][4515] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" host="localhost" Mar 3 14:03:26.345268 containerd[1576]: 2026-03-03 14:03:26.006 [INFO][4515] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578 Mar 3 14:03:26.345268 containerd[1576]: 2026-03-03 14:03:26.035 [INFO][4515] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" host="localhost" Mar 3 14:03:26.345268 containerd[1576]: 2026-03-03 14:03:26.112 [INFO][4515] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" host="localhost" Mar 3 14:03:26.345268 containerd[1576]: 2026-03-03 14:03:26.117 [INFO][4515] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" host="localhost" Mar 3 14:03:26.345268 containerd[1576]: 2026-03-03 14:03:26.124 [INFO][4515] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:26.345268 containerd[1576]: 2026-03-03 14:03:26.125 [INFO][4515] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" HandleID="k8s-pod-network.e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Workload="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" Mar 3 14:03:26.345424 containerd[1576]: 2026-03-03 14:03:26.158 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--vvdpm-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"39dff308-ff46-444b-bddd-0b45b42e0715", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 1, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-vvdpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22a70c96c29", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:26.345424 containerd[1576]: 2026-03-03 14:03:26.159 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" Mar 3 14:03:26.345424 containerd[1576]: 2026-03-03 14:03:26.159 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22a70c96c29 ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" Mar 3 
14:03:26.345424 containerd[1576]: 2026-03-03 14:03:26.208 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" Mar 3 14:03:26.345424 containerd[1576]: 2026-03-03 14:03:26.229 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--vvdpm-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"39dff308-ff46-444b-bddd-0b45b42e0715", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 1, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578", Pod:"coredns-7d764666f9-vvdpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22a70c96c29", MAC:"ba:b0:04:51:25:4c", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:26.345424 containerd[1576]: 2026-03-03 14:03:26.276 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" Namespace="kube-system" Pod="coredns-7d764666f9-vvdpm" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--vvdpm-eth0" Mar 3 14:03:26.618973 containerd[1576]: time="2026-03-03T14:03:26.618065520Z" level=info msg="connecting to shim d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac" address="unix:///run/containerd/s/0364e4d775ca7eb60037ab92ed9459d379cac960f98ed634a341671ca04bbaed" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:26.645524 containerd[1576]: time="2026-03-03T14:03:26.645469158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6677b978bd-gbpdl,Uid:c5130667-7949-42cd-8cf1-169b3aece1e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678\"" Mar 3 14:03:26.695086 containerd[1576]: time="2026-03-03T14:03:26.695022387Z" level=info msg="connecting to shim 
e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578" address="unix:///run/containerd/s/665f570a0ba79d56723c7792aabdfb068ebe4c9ae810a47ec7c30055dbcfe530" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:26.722410 systemd-networkd[1477]: cali32ffe152c9e: Link UP Mar 3 14:03:26.730213 systemd-networkd[1477]: cali32ffe152c9e: Gained carrier Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:25.760 [INFO][4521] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0 calico-apiserver-6677b978bd- calico-system 61665e7b-9fb3-4659-b57b-d6e2b5ad54ac 1059 0 2026-03-03 14:02:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6677b978bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6677b978bd-vb4zq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali32ffe152c9e [] [] }} ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:25.771 [INFO][4521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.217 [INFO][4573] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" 
HandleID="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Workload="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.274 [INFO][4573] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" HandleID="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Workload="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000aee30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6677b978bd-vb4zq", "timestamp":"2026-03-03 14:03:26.217220466 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005c49a0)} Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.276 [INFO][4573] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.276 [INFO][4573] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
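The assignment steps that follow — try the affinity for 192.168.88.128/26, load the block, claim the next free address — can be illustrated with the standard `ipaddress` module. This is a deliberate simplification: Calico tracks allocations in a per-block data structure in its datastore, not a plain set as sketched here:

```python
import ipaddress

def next_free(block_cidr, assigned):
    """Return the first host address in the block not already assigned.
    Simplified stand-in for Calico's per-block allocation bookkeeping."""
    block = ipaddress.ip_network(block_cidr)
    taken = {ipaddress.ip_address(a) for a in assigned}
    for ip in block.hosts():  # hosts() skips network/broadcast addresses
        if ip not in taken:
            return str(ip)
    return None  # block exhausted; Calico would look for another block

# Assuming .129-.133 are in use (the log above shows .132 and .133 already
# claimed), the next claim lands on .134 - matching calico-apiserver below.
print(next_free("192.168.88.128/26",
                ["192.168.88.129", "192.168.88.130", "192.168.88.131",
                 "192.168.88.132", "192.168.88.133"]))  # 192.168.88.134
```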
Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.314 [INFO][4573] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.348 [INFO][4573] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.405 [INFO][4573] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.445 [INFO][4573] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.461 [INFO][4573] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.472 [INFO][4573] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.477 [INFO][4573] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.528 [INFO][4573] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70 Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.655 [INFO][4573] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.684 [INFO][4573] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.684 [INFO][4573] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" host="localhost" Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.684 [INFO][4573] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:26.815247 containerd[1576]: 2026-03-03 14:03:26.684 [INFO][4573] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" HandleID="k8s-pod-network.80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Workload="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" Mar 3 14:03:26.825204 containerd[1576]: 2026-03-03 14:03:26.698 [INFO][4521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0", GenerateName:"calico-apiserver-6677b978bd-", Namespace:"calico-system", SelfLink:"", UID:"61665e7b-9fb3-4659-b57b-d6e2b5ad54ac", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6677b978bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6677b978bd-vb4zq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32ffe152c9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:26.825204 containerd[1576]: 2026-03-03 14:03:26.699 [INFO][4521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" Mar 3 14:03:26.825204 containerd[1576]: 2026-03-03 14:03:26.699 [INFO][4521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32ffe152c9e ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" Mar 3 14:03:26.825204 containerd[1576]: 2026-03-03 14:03:26.729 [INFO][4521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" Mar 3 14:03:26.825204 containerd[1576]: 2026-03-03 14:03:26.732 [INFO][4521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0", GenerateName:"calico-apiserver-6677b978bd-", Namespace:"calico-system", SelfLink:"", UID:"61665e7b-9fb3-4659-b57b-d6e2b5ad54ac", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6677b978bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70", Pod:"calico-apiserver-6677b978bd-vb4zq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32ffe152c9e", MAC:"5e:9f:d1:2f:49:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:26.825204 containerd[1576]: 2026-03-03 14:03:26.785 [INFO][4521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" 
Namespace="calico-system" Pod="calico-apiserver-6677b978bd-vb4zq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6677b978bd--vb4zq-eth0" Mar 3 14:03:26.877354 systemd[1]: Started cri-containerd-d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac.scope - libcontainer container d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac. Mar 3 14:03:27.027390 systemd[1]: Started cri-containerd-e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578.scope - libcontainer container e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578. Mar 3 14:03:27.084258 containerd[1576]: time="2026-03-03T14:03:27.084206091Z" level=info msg="connecting to shim 80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70" address="unix:///run/containerd/s/6b85f77381b071b224ca19f0d73478b4b3c1aa973f61d71576a487808c5e1c75" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:27.244150 kubelet[2855]: E0303 14:03:27.241360 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:27.252209 kubelet[2855]: E0303 14:03:27.251244 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:27.280146 containerd[1576]: time="2026-03-03T14:03:27.279522017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dpx6,Uid:a6da2aeb-b230-45f3-8292-e23a3c17d60c,Namespace:kube-system,Attempt:0,}" Mar 3 14:03:27.317120 systemd-networkd[1477]: caliedf6700cdd1: Gained IPv6LL Mar 3 14:03:27.421476 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:27.430350 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:27.471859 
systemd[1]: Started cri-containerd-80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70.scope - libcontainer container 80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70. Mar 3 14:03:27.620206 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:27.769416 containerd[1576]: time="2026-03-03T14:03:27.768306050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vvdpm,Uid:39dff308-ff46-444b-bddd-0b45b42e0715,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578\"" Mar 3 14:03:27.789963 kubelet[2855]: E0303 14:03:27.788435 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:27.868914 containerd[1576]: time="2026-03-03T14:03:27.866289213Z" level=info msg="CreateContainer within sandbox \"e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 14:03:27.931966 systemd-networkd[1477]: cali6ed234b3f65: Link UP Mar 3 14:03:27.941053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555007277.mount: Deactivated successfully. 
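The repeated kubelet warnings above ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") come from the glibc resolver honoring at most three `nameserver` entries in resolv.conf, so kubelet keeps the first three and logs the rest as omitted. A minimal sketch of that truncation, assuming the node's resolv.conf listed a fourth server:

```python
MAX_NS = 3  # glibc's MAXNS: the resolver reads at most three nameserver lines

def clamp_nameservers(nameservers):
    """Keep the first three nameservers and warn about the rest,
    mirroring the behaviour behind the kubelet messages above."""
    kept, dropped = nameservers[:MAX_NS], nameservers[MAX_NS:]
    if dropped:
        print(f"Nameserver limits exceeded, omitting: {' '.join(dropped)}")
    return kept

# "8.8.4.4" here is an assumption; the log only shows the three that survived.
print(clamp_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]))
```

The warning is cosmetic as long as the surviving three servers are reachable; it recurs on every pod sandbox because kubelet rebuilds the pod's resolv.conf each time.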
Mar 3 14:03:27.957943 containerd[1576]: time="2026-03-03T14:03:27.956080964Z" level=info msg="Container 927d62a02218065f465f4e78611b149846b1560c131f9307b1dd04736206e3df: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:03:27.960173 systemd-networkd[1477]: cali6ed234b3f65: Gained carrier Mar 3 14:03:28.011053 containerd[1576]: time="2026-03-03T14:03:28.010993124Z" level=info msg="CreateContainer within sandbox \"e2d33dad39993a76e9bc0cf082475e6ff252964579095b4b60edc27e7752a578\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"927d62a02218065f465f4e78611b149846b1560c131f9307b1dd04736206e3df\"" Mar 3 14:03:28.038966 containerd[1576]: time="2026-03-03T14:03:28.034421956Z" level=info msg="StartContainer for \"927d62a02218065f465f4e78611b149846b1560c131f9307b1dd04736206e3df\"" Mar 3 14:03:28.042235 containerd[1576]: time="2026-03-03T14:03:28.039342769Z" level=info msg="connecting to shim 927d62a02218065f465f4e78611b149846b1560c131f9307b1dd04736206e3df" address="unix:///run/containerd/s/665f570a0ba79d56723c7792aabdfb068ebe4c9ae810a47ec7c30055dbcfe530" protocol=ttrpc version=3 Mar 3 14:03:28.091404 systemd-networkd[1477]: cali32ffe152c9e: Gained IPv6LL Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.258 [INFO][4599] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0 goldmane-9f7667bb8- calico-system 2c967439-ed8f-40a6-ac45-3c8bab198902 1063 0 2026-03-03 14:02:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-wxxg6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6ed234b3f65 [] [] }} ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-wxxg6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.260 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxxg6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.588 [INFO][4743] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" HandleID="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Workload="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.626 [INFO][4743] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" HandleID="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Workload="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019e700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-wxxg6", "timestamp":"2026-03-03 14:03:27.588384634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030f340)} Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.626 [INFO][4743] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.626 [INFO][4743] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
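Each completed assignment is summarized in a single "Calico CNI IPAM assigned addresses" line, which is the easiest place to audit which pod got which address. A hypothetical extractor for those summary lines (pattern inferred from the log text above):

```python
import re

# Matches the 'IPAM assigned addresses IPv4=[...]' summaries emitted by
# ipam_plugin.go once an address has been claimed.
ASSIGN_RE = re.compile(r"IPAM assigned addresses IPv4=\[([^\]]*)\]")

def assigned_ips(lines):
    """Collect the IPv4 CIDRs reported by assignment-summary lines."""
    out = []
    for line in lines:
        m = ASSIGN_RE.search(line)
        if m and m.group(1):
            out.extend(part.strip() for part in m.group(1).split(","))
    return out

sample = [
    'ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses '
    'IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8eb903e0c8d8..."',
]
print(assigned_ips(sample))  # ['192.168.88.135/26']
```

Run over this section, it would recover the .133 (coredns), .134 (calico-apiserver), and .135 (goldmane) claims in order.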
Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.626 [INFO][4743] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.635 [INFO][4743] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.676 [INFO][4743] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.724 [INFO][4743] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.730 [INFO][4743] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.743 [INFO][4743] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.746 [INFO][4743] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.760 [INFO][4743] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606 Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.794 [INFO][4743] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.854 [INFO][4743] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.855 [INFO][4743] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" host="localhost" Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.855 [INFO][4743] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:28.258087 containerd[1576]: 2026-03-03 14:03:27.855 [INFO][4743] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" HandleID="k8s-pod-network.8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Workload="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" Mar 3 14:03:28.265855 containerd[1576]: 2026-03-03 14:03:27.893 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxxg6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"2c967439-ed8f-40a6-ac45-3c8bab198902", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-wxxg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ed234b3f65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:28.265855 containerd[1576]: 2026-03-03 14:03:27.896 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxxg6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" Mar 3 14:03:28.265855 containerd[1576]: 2026-03-03 14:03:27.897 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ed234b3f65 ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxxg6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" Mar 3 14:03:28.265855 containerd[1576]: 2026-03-03 14:03:28.021 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxxg6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" Mar 3 14:03:28.265855 containerd[1576]: 2026-03-03 14:03:28.036 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxxg6" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"2c967439-ed8f-40a6-ac45-3c8bab198902", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606", Pod:"goldmane-9f7667bb8-wxxg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ed234b3f65", MAC:"ee:dd:e1:b0:18:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:28.265855 containerd[1576]: 2026-03-03 14:03:28.119 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" Namespace="calico-system" Pod="goldmane-9f7667bb8-wxxg6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wxxg6-eth0" Mar 3 14:03:28.280281 systemd-networkd[1477]: cali22a70c96c29: Gained IPv6LL Mar 3 14:03:28.370197 containerd[1576]: 
time="2026-03-03T14:03:28.367039280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6677b978bd-vb4zq,Uid:61665e7b-9fb3-4659-b57b-d6e2b5ad54ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70\"" Mar 3 14:03:28.490469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300450208.mount: Deactivated successfully. Mar 3 14:03:28.504862 containerd[1576]: time="2026-03-03T14:03:28.496982466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6df4cc67f5-nnxg6,Uid:6516ca51-7391-44dd-b25b-3ff46412e8d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac\"" Mar 3 14:03:28.511966 systemd[1]: Started cri-containerd-927d62a02218065f465f4e78611b149846b1560c131f9307b1dd04736206e3df.scope - libcontainer container 927d62a02218065f465f4e78611b149846b1560c131f9307b1dd04736206e3df. Mar 3 14:03:28.728426 containerd[1576]: time="2026-03-03T14:03:28.728206401Z" level=info msg="connecting to shim 8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606" address="unix:///run/containerd/s/d6a6fc14e1499b85979aaf63504deb96836c9594c829e880d0a70324d4efdd01" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:28.961544 containerd[1576]: time="2026-03-03T14:03:28.959541405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:28.973929 containerd[1576]: time="2026-03-03T14:03:28.960970368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 3 14:03:28.981882 containerd[1576]: time="2026-03-03T14:03:28.980000908Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 
14:03:29.028151 systemd[1]: Started cri-containerd-8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606.scope - libcontainer container 8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606. Mar 3 14:03:29.078544 containerd[1576]: time="2026-03-03T14:03:29.076326477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:29.079543 containerd[1576]: time="2026-03-03T14:03:29.079320456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 8.141220487s" Mar 3 14:03:29.079543 containerd[1576]: time="2026-03-03T14:03:29.079368193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 3 14:03:29.094505 containerd[1576]: time="2026-03-03T14:03:29.093091478Z" level=info msg="StartContainer for \"927d62a02218065f465f4e78611b149846b1560c131f9307b1dd04736206e3df\" returns successfully" Mar 3 14:03:29.126493 containerd[1576]: time="2026-03-03T14:03:29.123071766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 3 14:03:29.292968 containerd[1576]: time="2026-03-03T14:03:29.192543506Z" level=info msg="CreateContainer within sandbox \"fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 3 14:03:29.283406 systemd-networkd[1477]: cali6ed234b3f65: Gained IPv6LL Mar 3 14:03:29.300464 systemd-resolved[1395]: Failed to determine the local 
hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:29.337931 systemd-networkd[1477]: calie497dc6781d: Link UP Mar 3 14:03:29.344257 systemd-networkd[1477]: calie497dc6781d: Gained carrier Mar 3 14:03:29.349757 containerd[1576]: time="2026-03-03T14:03:29.348878871Z" level=info msg="Container eccb0db8fc56237c904a08512c839140d5137f19746d82465f90f7898b0a0460: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:03:29.412865 containerd[1576]: time="2026-03-03T14:03:29.409351838Z" level=info msg="CreateContainer within sandbox \"fccf9d66ee354662819af4485e6aa3afa8835ea8d3024c2ec61df947c891960c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"eccb0db8fc56237c904a08512c839140d5137f19746d82465f90f7898b0a0460\"" Mar 3 14:03:29.425524 containerd[1576]: time="2026-03-03T14:03:29.424298037Z" level=info msg="StartContainer for \"eccb0db8fc56237c904a08512c839140d5137f19746d82465f90f7898b0a0460\"" Mar 3 14:03:29.474486 containerd[1576]: time="2026-03-03T14:03:29.474222885Z" level=info msg="connecting to shim eccb0db8fc56237c904a08512c839140d5137f19746d82465f90f7898b0a0460" address="unix:///run/containerd/s/6d98233bb67de240f907f5b5c62d79413eb6b017b69ed2db79723ff9a4aec023" protocol=ttrpc version=3 Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.068 [INFO][4755] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--8dpx6-eth0 coredns-7d764666f9- kube-system a6da2aeb-b230-45f3-8292-e23a3c17d60c 1057 0 2026-03-03 14:01:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-8dpx6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie497dc6781d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe 
TCP 8181 0 }] [] }} ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.068 [INFO][4755] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.632 [INFO][4814] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" HandleID="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Workload="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.691 [INFO][4814] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" HandleID="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Workload="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-8dpx6", "timestamp":"2026-03-03 14:03:28.632974256 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000517760)} Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.692 [INFO][4814] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.696 [INFO][4814] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.697 [INFO][4814] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.743 [INFO][4814] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.798 [INFO][4814] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.905 [INFO][4814] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.929 [INFO][4814] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.954 [INFO][4814] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.957 [INFO][4814] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:28.971 [INFO][4814] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:29.013 [INFO][4814] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:29.078 [INFO][4814] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:29.087 [INFO][4814] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" host="localhost" Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:29.088 [INFO][4814] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 14:03:29.531897 containerd[1576]: 2026-03-03 14:03:29.088 [INFO][4814] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" HandleID="k8s-pod-network.1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Workload="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" Mar 3 14:03:29.538297 containerd[1576]: 2026-03-03 14:03:29.302 [INFO][4755] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--8dpx6-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a6da2aeb-b230-45f3-8292-e23a3c17d60c", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 1, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-8dpx6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie497dc6781d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:29.538297 containerd[1576]: 2026-03-03 14:03:29.302 [INFO][4755] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" Mar 3 14:03:29.538297 containerd[1576]: 2026-03-03 14:03:29.302 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie497dc6781d ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" 
Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" Mar 3 14:03:29.538297 containerd[1576]: 2026-03-03 14:03:29.352 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" Mar 3 14:03:29.538297 containerd[1576]: 2026-03-03 14:03:29.355 [INFO][4755] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--8dpx6-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"a6da2aeb-b230-45f3-8292-e23a3c17d60c", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 14, 1, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a", Pod:"coredns-7d764666f9-8dpx6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie497dc6781d", MAC:"ba:28:d0:55:95:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 14:03:29.538297 containerd[1576]: 2026-03-03 14:03:29.470 [INFO][4755] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" Namespace="kube-system" Pod="coredns-7d764666f9-8dpx6" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--8dpx6-eth0" Mar 3 14:03:29.762470 systemd[1]: Started cri-containerd-eccb0db8fc56237c904a08512c839140d5137f19746d82465f90f7898b0a0460.scope - libcontainer container eccb0db8fc56237c904a08512c839140d5137f19746d82465f90f7898b0a0460. 
Mar 3 14:03:29.802532 containerd[1576]: time="2026-03-03T14:03:29.801254926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wxxg6,Uid:2c967439-ed8f-40a6-ac45-3c8bab198902,Namespace:calico-system,Attempt:0,} returns sandbox id \"8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606\"" Mar 3 14:03:29.962518 containerd[1576]: time="2026-03-03T14:03:29.962404556Z" level=info msg="connecting to shim 1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a" address="unix:///run/containerd/s/1e5e1c87d8d549f6f97473debfc41f47b09845d1c718c99245c9afc1e91cc468" namespace=k8s.io protocol=ttrpc version=3 Mar 3 14:03:30.262343 kubelet[2855]: E0303 14:03:30.261024 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:30.296176 systemd[1]: Started cri-containerd-1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a.scope - libcontainer container 1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a. 
Mar 3 14:03:30.504980 containerd[1576]: time="2026-03-03T14:03:30.504530172Z" level=info msg="StartContainer for \"eccb0db8fc56237c904a08512c839140d5137f19746d82465f90f7898b0a0460\" returns successfully" Mar 3 14:03:30.517554 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 14:03:30.763305 containerd[1576]: time="2026-03-03T14:03:30.762441676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-8dpx6,Uid:a6da2aeb-b230-45f3-8292-e23a3c17d60c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a\"" Mar 3 14:03:30.783240 kubelet[2855]: E0303 14:03:30.782140 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:30.817161 containerd[1576]: time="2026-03-03T14:03:30.816054297Z" level=info msg="CreateContainer within sandbox \"1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 14:03:30.941371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985939939.mount: Deactivated successfully. 
Mar 3 14:03:30.947477 containerd[1576]: time="2026-03-03T14:03:30.946353168Z" level=info msg="Container 7a7cd39cb2554d0879a461c70c00c323f7ecec171b6183c2fffb3f65f96259ac: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:03:31.014166 containerd[1576]: time="2026-03-03T14:03:31.012894891Z" level=info msg="CreateContainer within sandbox \"1378c0c98474e6a47975cee343688ab6559b1a98d99637be18cbaefe70a7b18a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a7cd39cb2554d0879a461c70c00c323f7ecec171b6183c2fffb3f65f96259ac\"" Mar 3 14:03:31.019188 containerd[1576]: time="2026-03-03T14:03:31.018018965Z" level=info msg="StartContainer for \"7a7cd39cb2554d0879a461c70c00c323f7ecec171b6183c2fffb3f65f96259ac\"" Mar 3 14:03:31.035377 containerd[1576]: time="2026-03-03T14:03:31.035248515Z" level=info msg="connecting to shim 7a7cd39cb2554d0879a461c70c00c323f7ecec171b6183c2fffb3f65f96259ac" address="unix:///run/containerd/s/1e5e1c87d8d549f6f97473debfc41f47b09845d1c718c99245c9afc1e91cc468" protocol=ttrpc version=3 Mar 3 14:03:31.099438 systemd-networkd[1477]: calie497dc6781d: Gained IPv6LL Mar 3 14:03:31.342204 systemd[1]: Started cri-containerd-7a7cd39cb2554d0879a461c70c00c323f7ecec171b6183c2fffb3f65f96259ac.scope - libcontainer container 7a7cd39cb2554d0879a461c70c00c323f7ecec171b6183c2fffb3f65f96259ac. 
Mar 3 14:03:31.349263 kubelet[2855]: E0303 14:03:31.349121 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:31.465078 kubelet[2855]: I0303 14:03:31.463247 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-vvdpm" podStartSLOduration=118.463227944 podStartE2EDuration="1m58.463227944s" podCreationTimestamp="2026-03-03 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 14:03:30.396123136 +0000 UTC m=+117.276128799" watchObservedRunningTime="2026-03-03 14:03:31.463227944 +0000 UTC m=+118.343233617" Mar 3 14:03:31.465078 kubelet[2855]: I0303 14:03:31.463515 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7dd7567d64-pnd7p" podStartSLOduration=4.744677903 podStartE2EDuration="16.463509548s" podCreationTimestamp="2026-03-03 14:03:15 +0000 UTC" firstStartedPulling="2026-03-03 14:03:17.401251408 +0000 UTC m=+104.281257071" lastFinishedPulling="2026-03-03 14:03:29.120083052 +0000 UTC m=+116.000088716" observedRunningTime="2026-03-03 14:03:31.448434417 +0000 UTC m=+118.328440101" watchObservedRunningTime="2026-03-03 14:03:31.463509548 +0000 UTC m=+118.343515212" Mar 3 14:03:31.607454 containerd[1576]: time="2026-03-03T14:03:31.602518760Z" level=info msg="StartContainer for \"7a7cd39cb2554d0879a461c70c00c323f7ecec171b6183c2fffb3f65f96259ac\" returns successfully" Mar 3 14:03:32.314539 containerd[1576]: time="2026-03-03T14:03:32.314067878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:32.315913 containerd[1576]: time="2026-03-03T14:03:32.315546087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active 
requests=0, bytes read=8792502" Mar 3 14:03:32.322143 containerd[1576]: time="2026-03-03T14:03:32.322102904Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:32.332275 containerd[1576]: time="2026-03-03T14:03:32.331944684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 14:03:32.335131 containerd[1576]: time="2026-03-03T14:03:32.334255151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.211141507s" Mar 3 14:03:32.335131 containerd[1576]: time="2026-03-03T14:03:32.334830922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 3 14:03:32.341062 containerd[1576]: time="2026-03-03T14:03:32.340261233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 3 14:03:32.368131 containerd[1576]: time="2026-03-03T14:03:32.368083060Z" level=info msg="CreateContainer within sandbox \"8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 3 14:03:32.377702 kubelet[2855]: E0303 14:03:32.377252 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:32.378994 kubelet[2855]: E0303 14:03:32.378123 2855 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:03:32.467240 containerd[1576]: time="2026-03-03T14:03:32.467181888Z" level=info msg="Container 17b86a09978cf99142b48a8048eea499488b5be5f80e7f60eaf5f31950b8656b: CDI devices from CRI Config.CDIDevices: []" Mar 3 14:03:32.520869 kubelet[2855]: I0303 14:03:32.517537 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-8dpx6" podStartSLOduration=119.517523281 podStartE2EDuration="1m59.517523281s" podCreationTimestamp="2026-03-03 14:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 14:03:32.47008696 +0000 UTC m=+119.350092633" watchObservedRunningTime="2026-03-03 14:03:32.517523281 +0000 UTC m=+119.397528944" Mar 3 14:03:32.587124 containerd[1576]: time="2026-03-03T14:03:32.584502033Z" level=info msg="CreateContainer within sandbox \"8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"17b86a09978cf99142b48a8048eea499488b5be5f80e7f60eaf5f31950b8656b\"" Mar 3 14:03:32.593247 containerd[1576]: time="2026-03-03T14:03:32.592485241Z" level=info msg="StartContainer for \"17b86a09978cf99142b48a8048eea499488b5be5f80e7f60eaf5f31950b8656b\"" Mar 3 14:03:32.606542 containerd[1576]: time="2026-03-03T14:03:32.604074122Z" level=info msg="connecting to shim 17b86a09978cf99142b48a8048eea499488b5be5f80e7f60eaf5f31950b8656b" address="unix:///run/containerd/s/12d79802b83b36f25f3eaf6690da5a5c246db5c69fffc403bbc145fdd922b0a4" protocol=ttrpc version=3 Mar 3 14:03:32.771218 systemd[1]: Started cri-containerd-17b86a09978cf99142b48a8048eea499488b5be5f80e7f60eaf5f31950b8656b.scope - libcontainer container 17b86a09978cf99142b48a8048eea499488b5be5f80e7f60eaf5f31950b8656b. 
Mar 3 14:03:33.170549 containerd[1576]: time="2026-03-03T14:03:33.169971743Z" level=info msg="StartContainer for \"17b86a09978cf99142b48a8048eea499488b5be5f80e7f60eaf5f31950b8656b\" returns successfully"
Mar 3 14:03:33.412885 kubelet[2855]: E0303 14:03:33.412366 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:33.415559 kubelet[2855]: E0303 14:03:33.414051 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:34.419523 kubelet[2855]: E0303 14:03:34.419455 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:40.477017 containerd[1576]: time="2026-03-03T14:03:40.476220904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:40.482934 containerd[1576]: time="2026-03-03T14:03:40.482387913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 3 14:03:40.489090 containerd[1576]: time="2026-03-03T14:03:40.487475328Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:40.502157 containerd[1576]: time="2026-03-03T14:03:40.500877282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:40.504444 containerd[1576]: time="2026-03-03T14:03:40.503961400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 8.163669647s"
Mar 3 14:03:40.504444 containerd[1576]: time="2026-03-03T14:03:40.504009719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 3 14:03:40.513160 containerd[1576]: time="2026-03-03T14:03:40.512060643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 3 14:03:40.535095 containerd[1576]: time="2026-03-03T14:03:40.534203478Z" level=info msg="CreateContainer within sandbox \"e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 3 14:03:40.589184 containerd[1576]: time="2026-03-03T14:03:40.589142170Z" level=info msg="Container 30d50d53065da2e135f7a3c07cf2dbd22a3d88690ed1181a47a59e11fd30ea6f: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:03:40.637388 containerd[1576]: time="2026-03-03T14:03:40.637164080Z" level=info msg="CreateContainer within sandbox \"e1d0b6ceef07e15693f678c496d60694d7ae9b83ea40e1c6d509aa44648e5678\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"30d50d53065da2e135f7a3c07cf2dbd22a3d88690ed1181a47a59e11fd30ea6f\""
Mar 3 14:03:40.644215 containerd[1576]: time="2026-03-03T14:03:40.643307354Z" level=info msg="StartContainer for \"30d50d53065da2e135f7a3c07cf2dbd22a3d88690ed1181a47a59e11fd30ea6f\""
Mar 3 14:03:40.652198 containerd[1576]: time="2026-03-03T14:03:40.650991749Z" level=info msg="connecting to shim 30d50d53065da2e135f7a3c07cf2dbd22a3d88690ed1181a47a59e11fd30ea6f" address="unix:///run/containerd/s/f38e8ac46e7bdaa24ef382cd64e42bba692cd50b89a44aaf5555a684bf50cd13" protocol=ttrpc version=3
Mar 3 14:03:40.776418 containerd[1576]: time="2026-03-03T14:03:40.776271997Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:40.780194 containerd[1576]: time="2026-03-03T14:03:40.779867386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 3 14:03:40.791271 containerd[1576]: time="2026-03-03T14:03:40.787092731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 274.849477ms"
Mar 3 14:03:40.791271 containerd[1576]: time="2026-03-03T14:03:40.787241947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 3 14:03:40.797234 containerd[1576]: time="2026-03-03T14:03:40.796199290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 3 14:03:40.813566 systemd[1]: Started cri-containerd-30d50d53065da2e135f7a3c07cf2dbd22a3d88690ed1181a47a59e11fd30ea6f.scope - libcontainer container 30d50d53065da2e135f7a3c07cf2dbd22a3d88690ed1181a47a59e11fd30ea6f.
Mar 3 14:03:40.819175 containerd[1576]: time="2026-03-03T14:03:40.814869162Z" level=info msg="CreateContainer within sandbox \"80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 3 14:03:40.920049 containerd[1576]: time="2026-03-03T14:03:40.918983974Z" level=info msg="Container 112df73d2cf7e1b2b6a4eab713a74315c4eb448d8f2264d7ab03950549665ab6: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:03:40.969042 containerd[1576]: time="2026-03-03T14:03:40.967971820Z" level=info msg="CreateContainer within sandbox \"80cc1a69152219a6585e337216614fe80e750388ee2749c94c1cde06389d5e70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"112df73d2cf7e1b2b6a4eab713a74315c4eb448d8f2264d7ab03950549665ab6\""
Mar 3 14:03:40.977945 containerd[1576]: time="2026-03-03T14:03:40.976262800Z" level=info msg="StartContainer for \"112df73d2cf7e1b2b6a4eab713a74315c4eb448d8f2264d7ab03950549665ab6\""
Mar 3 14:03:41.001876 containerd[1576]: time="2026-03-03T14:03:41.001213644Z" level=info msg="connecting to shim 112df73d2cf7e1b2b6a4eab713a74315c4eb448d8f2264d7ab03950549665ab6" address="unix:///run/containerd/s/6b85f77381b071b224ca19f0d73478b4b3c1aa973f61d71576a487808c5e1c75" protocol=ttrpc version=3
Mar 3 14:03:41.181056 systemd[1]: Started cri-containerd-112df73d2cf7e1b2b6a4eab713a74315c4eb448d8f2264d7ab03950549665ab6.scope - libcontainer container 112df73d2cf7e1b2b6a4eab713a74315c4eb448d8f2264d7ab03950549665ab6.
Mar 3 14:03:41.341855 containerd[1576]: time="2026-03-03T14:03:41.337971240Z" level=info msg="StartContainer for \"30d50d53065da2e135f7a3c07cf2dbd22a3d88690ed1181a47a59e11fd30ea6f\" returns successfully"
Mar 3 14:03:41.639461 kubelet[2855]: I0303 14:03:41.639043 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6677b978bd-gbpdl" podStartSLOduration=76.84199147 podStartE2EDuration="1m30.639019737s" podCreationTimestamp="2026-03-03 14:02:11 +0000 UTC" firstStartedPulling="2026-03-03 14:03:26.711211158 +0000 UTC m=+113.591216831" lastFinishedPulling="2026-03-03 14:03:40.508239435 +0000 UTC m=+127.388245098" observedRunningTime="2026-03-03 14:03:41.629180625 +0000 UTC m=+128.509186318" watchObservedRunningTime="2026-03-03 14:03:41.639019737 +0000 UTC m=+128.519025400"
Mar 3 14:03:41.972992 containerd[1576]: time="2026-03-03T14:03:41.972156246Z" level=info msg="StartContainer for \"112df73d2cf7e1b2b6a4eab713a74315c4eb448d8f2264d7ab03950549665ab6\" returns successfully"
Mar 3 14:03:42.686022 kubelet[2855]: I0303 14:03:42.683394 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6677b978bd-vb4zq" podStartSLOduration=79.297162427 podStartE2EDuration="1m31.683375263s" podCreationTimestamp="2026-03-03 14:02:11 +0000 UTC" firstStartedPulling="2026-03-03 14:03:28.405929718 +0000 UTC m=+115.285935381" lastFinishedPulling="2026-03-03 14:03:40.792142554 +0000 UTC m=+127.672148217" observedRunningTime="2026-03-03 14:03:42.679477916 +0000 UTC m=+129.559483600" watchObservedRunningTime="2026-03-03 14:03:42.683375263 +0000 UTC m=+129.563380926"
Mar 3 14:03:49.813293 containerd[1576]: time="2026-03-03T14:03:49.813085239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:49.818543 containerd[1576]: time="2026-03-03T14:03:49.818498607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 3 14:03:49.827405 containerd[1576]: time="2026-03-03T14:03:49.827277121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 9.031035131s"
Mar 3 14:03:49.827405 containerd[1576]: time="2026-03-03T14:03:49.827386034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 3 14:03:49.837993 containerd[1576]: time="2026-03-03T14:03:49.837941185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 3 14:03:49.869739 containerd[1576]: time="2026-03-03T14:03:49.869029259Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:49.870323 containerd[1576]: time="2026-03-03T14:03:49.870228622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:49.881725 containerd[1576]: time="2026-03-03T14:03:49.880861457Z" level=info msg="CreateContainer within sandbox \"d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 3 14:03:49.911380 containerd[1576]: time="2026-03-03T14:03:49.911207004Z" level=info msg="Container f9f02366f5b26f3b24fe4889fe0bca3033fde54acfb17ca907dbadeab3fa5764: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:03:49.934241 containerd[1576]: time="2026-03-03T14:03:49.934012071Z" level=info msg="CreateContainer within sandbox \"d44924dac5caadc6c6722935184610d5f334923ecb854d7146a1dc931604c8ac\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f9f02366f5b26f3b24fe4889fe0bca3033fde54acfb17ca907dbadeab3fa5764\""
Mar 3 14:03:49.935966 containerd[1576]: time="2026-03-03T14:03:49.935876322Z" level=info msg="StartContainer for \"f9f02366f5b26f3b24fe4889fe0bca3033fde54acfb17ca907dbadeab3fa5764\""
Mar 3 14:03:49.939520 containerd[1576]: time="2026-03-03T14:03:49.939432702Z" level=info msg="connecting to shim f9f02366f5b26f3b24fe4889fe0bca3033fde54acfb17ca907dbadeab3fa5764" address="unix:///run/containerd/s/0364e4d775ca7eb60037ab92ed9459d379cac960f98ed634a341671ca04bbaed" protocol=ttrpc version=3
Mar 3 14:03:50.018194 systemd[1]: Started cri-containerd-f9f02366f5b26f3b24fe4889fe0bca3033fde54acfb17ca907dbadeab3fa5764.scope - libcontainer container f9f02366f5b26f3b24fe4889fe0bca3033fde54acfb17ca907dbadeab3fa5764.
Mar 3 14:03:50.151787 containerd[1576]: time="2026-03-03T14:03:50.151312580Z" level=info msg="StartContainer for \"f9f02366f5b26f3b24fe4889fe0bca3033fde54acfb17ca907dbadeab3fa5764\" returns successfully"
Mar 3 14:03:50.788321 kubelet[2855]: I0303 14:03:50.788257 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6df4cc67f5-nnxg6" podStartSLOduration=77.469426391 podStartE2EDuration="1m38.788240326s" podCreationTimestamp="2026-03-03 14:02:12 +0000 UTC" firstStartedPulling="2026-03-03 14:03:28.515074145 +0000 UTC m=+115.395079807" lastFinishedPulling="2026-03-03 14:03:49.83388808 +0000 UTC m=+136.713893742" observedRunningTime="2026-03-03 14:03:50.7881949 +0000 UTC m=+137.668200573" watchObservedRunningTime="2026-03-03 14:03:50.788240326 +0000 UTC m=+137.668245989"
Mar 3 14:03:53.336032 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:46838.service - OpenSSH per-connection server daemon (10.0.0.1:46838).
Mar 3 14:03:53.574510 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 46838 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:03:53.583508 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:53.605084 systemd-logind[1548]: New session 10 of user core.
Mar 3 14:03:53.618265 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 3 14:03:54.156701 sshd[5368]: Connection closed by 10.0.0.1 port 46838
Mar 3 14:03:54.157190 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:54.163293 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit.
Mar 3 14:03:54.163685 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:46838.service: Deactivated successfully.
Mar 3 14:03:54.166779 systemd[1]: session-10.scope: Deactivated successfully.
Mar 3 14:03:54.170105 systemd-logind[1548]: Removed session 10.
Mar 3 14:03:54.494057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155474934.mount: Deactivated successfully.
Mar 3 14:03:55.443121 containerd[1576]: time="2026-03-03T14:03:55.443044649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:55.450292 containerd[1576]: time="2026-03-03T14:03:55.450101031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 3 14:03:55.452867 containerd[1576]: time="2026-03-03T14:03:55.452779881Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:55.457615 containerd[1576]: time="2026-03-03T14:03:55.457541396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:55.458986 containerd[1576]: time="2026-03-03T14:03:55.458952245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.619083082s"
Mar 3 14:03:55.459060 containerd[1576]: time="2026-03-03T14:03:55.458991231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 3 14:03:55.460883 containerd[1576]: time="2026-03-03T14:03:55.460861505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 3 14:03:55.484435 containerd[1576]: time="2026-03-03T14:03:55.484009800Z" level=info msg="CreateContainer within sandbox \"8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 3 14:03:55.499080 containerd[1576]: time="2026-03-03T14:03:55.497430615Z" level=info msg="Container 7c0d938fbfb33342fa8cd2e3d9436dfa9d6d2326ca0bf82b00a44fc50151572f: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:03:55.516680 containerd[1576]: time="2026-03-03T14:03:55.516448578Z" level=info msg="CreateContainer within sandbox \"8eb903e0c8d8132808c210c45333c977b823f14959bbc6e79d0a75f9fc058606\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7c0d938fbfb33342fa8cd2e3d9436dfa9d6d2326ca0bf82b00a44fc50151572f\""
Mar 3 14:03:55.518683 containerd[1576]: time="2026-03-03T14:03:55.517978581Z" level=info msg="StartContainer for \"7c0d938fbfb33342fa8cd2e3d9436dfa9d6d2326ca0bf82b00a44fc50151572f\""
Mar 3 14:03:55.520322 containerd[1576]: time="2026-03-03T14:03:55.520238095Z" level=info msg="connecting to shim 7c0d938fbfb33342fa8cd2e3d9436dfa9d6d2326ca0bf82b00a44fc50151572f" address="unix:///run/containerd/s/d6a6fc14e1499b85979aaf63504deb96836c9594c829e880d0a70324d4efdd01" protocol=ttrpc version=3
Mar 3 14:03:55.565234 systemd[1]: Started cri-containerd-7c0d938fbfb33342fa8cd2e3d9436dfa9d6d2326ca0bf82b00a44fc50151572f.scope - libcontainer container 7c0d938fbfb33342fa8cd2e3d9436dfa9d6d2326ca0bf82b00a44fc50151572f.
Mar 3 14:03:55.720212 containerd[1576]: time="2026-03-03T14:03:55.719887481Z" level=info msg="StartContainer for \"7c0d938fbfb33342fa8cd2e3d9436dfa9d6d2326ca0bf82b00a44fc50151572f\" returns successfully"
Mar 3 14:03:55.866894 kubelet[2855]: I0303 14:03:55.866372 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-wxxg6" podStartSLOduration=80.236072009 podStartE2EDuration="1m45.866350581s" podCreationTimestamp="2026-03-03 14:02:10 +0000 UTC" firstStartedPulling="2026-03-03 14:03:29.829919688 +0000 UTC m=+116.709925350" lastFinishedPulling="2026-03-03 14:03:55.460198259 +0000 UTC m=+142.340203922" observedRunningTime="2026-03-03 14:03:55.852058169 +0000 UTC m=+142.732063842" watchObservedRunningTime="2026-03-03 14:03:55.866350581 +0000 UTC m=+142.746356374"
Mar 3 14:03:56.600811 containerd[1576]: time="2026-03-03T14:03:56.600762926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:56.603283 containerd[1576]: time="2026-03-03T14:03:56.603166433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 3 14:03:56.605182 containerd[1576]: time="2026-03-03T14:03:56.605009659Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:56.609833 containerd[1576]: time="2026-03-03T14:03:56.609292171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 14:03:56.609918 containerd[1576]: time="2026-03-03T14:03:56.609859325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.148970203s"
Mar 3 14:03:56.609918 containerd[1576]: time="2026-03-03T14:03:56.609892400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 3 14:03:56.619068 containerd[1576]: time="2026-03-03T14:03:56.618820125Z" level=info msg="CreateContainer within sandbox \"8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 3 14:03:56.631685 containerd[1576]: time="2026-03-03T14:03:56.631373220Z" level=info msg="Container d49a447ed2dc02e850026f260639d2477d82a90eb189cc7f306162cc8beca7d2: CDI devices from CRI Config.CDIDevices: []"
Mar 3 14:03:56.661896 containerd[1576]: time="2026-03-03T14:03:56.661503833Z" level=info msg="CreateContainer within sandbox \"8e45038501f036d0e5fd337a40abe4df6922ab7dbbdbce8fb418f0269ba26ce5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d49a447ed2dc02e850026f260639d2477d82a90eb189cc7f306162cc8beca7d2\""
Mar 3 14:03:56.664698 containerd[1576]: time="2026-03-03T14:03:56.663246844Z" level=info msg="StartContainer for \"d49a447ed2dc02e850026f260639d2477d82a90eb189cc7f306162cc8beca7d2\""
Mar 3 14:03:56.668956 containerd[1576]: time="2026-03-03T14:03:56.668385227Z" level=info msg="connecting to shim d49a447ed2dc02e850026f260639d2477d82a90eb189cc7f306162cc8beca7d2" address="unix:///run/containerd/s/12d79802b83b36f25f3eaf6690da5a5c246db5c69fffc403bbc145fdd922b0a4" protocol=ttrpc version=3
Mar 3 14:03:56.717424 systemd[1]: Started cri-containerd-d49a447ed2dc02e850026f260639d2477d82a90eb189cc7f306162cc8beca7d2.scope - libcontainer container d49a447ed2dc02e850026f260639d2477d82a90eb189cc7f306162cc8beca7d2.
Mar 3 14:03:56.902903 containerd[1576]: time="2026-03-03T14:03:56.902231872Z" level=info msg="StartContainer for \"d49a447ed2dc02e850026f260639d2477d82a90eb189cc7f306162cc8beca7d2\" returns successfully"
Mar 3 14:03:57.524976 kubelet[2855]: I0303 14:03:57.524813 2855 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 3 14:03:57.524976 kubelet[2855]: I0303 14:03:57.524894 2855 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 3 14:03:57.851793 kubelet[2855]: I0303 14:03:57.850787 2855 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-xk9vk" podStartSLOduration=74.54386187 podStartE2EDuration="1m45.850769765s" podCreationTimestamp="2026-03-03 14:02:12 +0000 UTC" firstStartedPulling="2026-03-03 14:03:25.304400663 +0000 UTC m=+112.184406327" lastFinishedPulling="2026-03-03 14:03:56.611308558 +0000 UTC m=+143.491314222" observedRunningTime="2026-03-03 14:03:57.850418513 +0000 UTC m=+144.730424196" watchObservedRunningTime="2026-03-03 14:03:57.850769765 +0000 UTC m=+144.730775429"
Mar 3 14:03:59.179336 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:40620.service - OpenSSH per-connection server daemon (10.0.0.1:40620).
Mar 3 14:03:59.364489 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:03:59.368246 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:59.378352 systemd-logind[1548]: New session 11 of user core.
Mar 3 14:03:59.389163 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 3 14:03:59.677204 sshd[5517]: Connection closed by 10.0.0.1 port 40620
Mar 3 14:03:59.688368 sshd-session[5514]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:59.700945 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:40620.service: Deactivated successfully.
Mar 3 14:03:59.704564 systemd[1]: session-11.scope: Deactivated successfully.
Mar 3 14:03:59.707973 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit.
Mar 3 14:03:59.712129 systemd-logind[1548]: Removed session 11.
Mar 3 14:04:04.703696 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:40630.service - OpenSSH per-connection server daemon (10.0.0.1:40630).
Mar 3 14:04:04.844506 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 40630 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:04.847265 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:04.856281 systemd-logind[1548]: New session 12 of user core.
Mar 3 14:04:04.865041 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 3 14:04:05.114069 sshd[5537]: Connection closed by 10.0.0.1 port 40630
Mar 3 14:04:05.115009 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:05.123547 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:40630.service: Deactivated successfully.
Mar 3 14:04:05.126522 systemd[1]: session-12.scope: Deactivated successfully.
Mar 3 14:04:05.128474 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit.
Mar 3 14:04:05.132167 systemd-logind[1548]: Removed session 12.
Mar 3 14:04:06.206447 kubelet[2855]: E0303 14:04:06.194048 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:04:11.214996 kubelet[2855]: E0303 14:04:11.205276 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:04:11.240245 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:48366.service - OpenSSH per-connection server daemon (10.0.0.1:48366).
Mar 3 14:04:11.432745 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 48366 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:11.441531 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:11.457345 systemd-logind[1548]: New session 13 of user core.
Mar 3 14:04:11.466018 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 3 14:04:11.668356 sshd[5622]: Connection closed by 10.0.0.1 port 48366
Mar 3 14:04:11.669004 sshd-session[5568]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:11.677334 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:48366.service: Deactivated successfully.
Mar 3 14:04:11.682070 systemd[1]: session-13.scope: Deactivated successfully.
Mar 3 14:04:11.684857 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit.
Mar 3 14:04:11.688112 systemd-logind[1548]: Removed session 13.
Mar 3 14:04:13.182559 kubelet[2855]: E0303 14:04:13.182335 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:04:16.703268 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:48376.service - OpenSSH per-connection server daemon (10.0.0.1:48376).
Mar 3 14:04:16.870760 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 48376 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:16.874094 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:16.886909 systemd-logind[1548]: New session 14 of user core.
Mar 3 14:04:16.898153 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 3 14:04:17.133021 sshd[5688]: Connection closed by 10.0.0.1 port 48376
Mar 3 14:04:17.133990 sshd-session[5683]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:17.150912 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:48380.service - OpenSSH per-connection server daemon (10.0.0.1:48380).
Mar 3 14:04:17.152210 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:48376.service: Deactivated successfully.
Mar 3 14:04:17.156367 systemd[1]: session-14.scope: Deactivated successfully.
Mar 3 14:04:17.157996 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit.
Mar 3 14:04:17.164305 systemd-logind[1548]: Removed session 14.
Mar 3 14:04:17.236651 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 48380 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:17.239443 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:17.248261 systemd-logind[1548]: New session 15 of user core.
Mar 3 14:04:17.261216 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 3 14:04:17.522927 sshd[5705]: Connection closed by 10.0.0.1 port 48380
Mar 3 14:04:17.523979 sshd-session[5699]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:17.541012 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:48380.service: Deactivated successfully.
Mar 3 14:04:17.548119 systemd[1]: session-15.scope: Deactivated successfully.
Mar 3 14:04:17.551993 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit.
Mar 3 14:04:17.565403 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:48392.service - OpenSSH per-connection server daemon (10.0.0.1:48392).
Mar 3 14:04:17.570201 systemd-logind[1548]: Removed session 15.
Mar 3 14:04:17.653427 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 48392 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:17.656914 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:17.667435 systemd-logind[1548]: New session 16 of user core.
Mar 3 14:04:17.680209 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 3 14:04:17.859361 sshd[5720]: Connection closed by 10.0.0.1 port 48392
Mar 3 14:04:17.861521 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:17.869383 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:48392.service: Deactivated successfully.
Mar 3 14:04:17.873467 systemd[1]: session-16.scope: Deactivated successfully.
Mar 3 14:04:17.877980 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit.
Mar 3 14:04:17.880893 systemd-logind[1548]: Removed session 16.
Mar 3 14:04:21.188731 kubelet[2855]: E0303 14:04:21.187909 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:04:22.911328 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:59656.service - OpenSSH per-connection server daemon (10.0.0.1:59656).
Mar 3 14:04:23.323385 sshd[5759]: Accepted publickey for core from 10.0.0.1 port 59656 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:23.341037 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:23.378538 systemd-logind[1548]: New session 17 of user core.
Mar 3 14:04:23.405902 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 3 14:04:23.996213 sshd[5762]: Connection closed by 10.0.0.1 port 59656
Mar 3 14:04:23.997955 sshd-session[5759]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:24.009288 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit.
Mar 3 14:04:24.009491 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:59656.service: Deactivated successfully.
Mar 3 14:04:24.017299 systemd[1]: session-17.scope: Deactivated successfully.
Mar 3 14:04:24.025545 systemd-logind[1548]: Removed session 17.
Mar 3 14:04:29.012198 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:59678.service - OpenSSH per-connection server daemon (10.0.0.1:59678).
Mar 3 14:04:29.122934 sshd[5803]: Accepted publickey for core from 10.0.0.1 port 59678 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:29.125985 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:29.138171 systemd-logind[1548]: New session 18 of user core.
Mar 3 14:04:29.146039 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 3 14:04:29.515892 sshd[5806]: Connection closed by 10.0.0.1 port 59678
Mar 3 14:04:29.516499 sshd-session[5803]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:29.538077 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:59678.service: Deactivated successfully.
Mar 3 14:04:29.543389 systemd[1]: session-18.scope: Deactivated successfully.
Mar 3 14:04:29.546467 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit.
Mar 3 14:04:29.552408 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:59692.service - OpenSSH per-connection server daemon (10.0.0.1:59692).
Mar 3 14:04:29.555851 systemd-logind[1548]: Removed session 18.
Mar 3 14:04:29.621277 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 59692 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:29.624059 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:29.633535 systemd-logind[1548]: New session 19 of user core.
Mar 3 14:04:29.651992 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 3 14:04:30.116015 sshd[5822]: Connection closed by 10.0.0.1 port 59692
Mar 3 14:04:30.117238 sshd-session[5819]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:30.131333 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:59702.service - OpenSSH per-connection server daemon (10.0.0.1:59702).
Mar 3 14:04:30.142117 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:59692.service: Deactivated successfully.
Mar 3 14:04:30.148182 systemd[1]: session-19.scope: Deactivated successfully.
Mar 3 14:04:30.153100 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit.
Mar 3 14:04:30.158401 systemd-logind[1548]: Removed session 19.
Mar 3 14:04:30.374223 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 59702 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:30.378868 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:30.389348 systemd-logind[1548]: New session 20 of user core.
Mar 3 14:04:30.398136 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 3 14:04:31.305239 sshd[5839]: Connection closed by 10.0.0.1 port 59702
Mar 3 14:04:31.306121 sshd-session[5830]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:31.323290 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:59706.service - OpenSSH per-connection server daemon (10.0.0.1:59706).
Mar 3 14:04:31.325562 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:59702.service: Deactivated successfully.
Mar 3 14:04:31.338790 systemd[1]: session-20.scope: Deactivated successfully.
Mar 3 14:04:31.346196 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit.
Mar 3 14:04:31.356727 systemd-logind[1548]: Removed session 20.
Mar 3 14:04:31.492873 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 59706 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:31.498074 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:31.512322 systemd-logind[1548]: New session 21 of user core.
Mar 3 14:04:31.520971 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 3 14:04:32.095951 sshd[5867]: Connection closed by 10.0.0.1 port 59706
Mar 3 14:04:32.098133 sshd-session[5858]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:32.110813 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:59706.service: Deactivated successfully.
Mar 3 14:04:32.116461 systemd[1]: session-21.scope: Deactivated successfully.
Mar 3 14:04:32.119654 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit.
Mar 3 14:04:32.124897 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:59712.service - OpenSSH per-connection server daemon (10.0.0.1:59712).
Mar 3 14:04:32.134946 systemd-logind[1548]: Removed session 21.
Mar 3 14:04:32.228896 sshd[5879]: Accepted publickey for core from 10.0.0.1 port 59712 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:32.236790 sshd-session[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:32.269901 systemd-logind[1548]: New session 22 of user core.
Mar 3 14:04:32.280943 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 3 14:04:32.701787 sshd[5882]: Connection closed by 10.0.0.1 port 59712
Mar 3 14:04:32.705869 sshd-session[5879]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:32.735284 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:59712.service: Deactivated successfully.
Mar 3 14:04:32.747243 systemd[1]: session-22.scope: Deactivated successfully.
Mar 3 14:04:32.762108 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit.
Mar 3 14:04:32.770971 systemd-logind[1548]: Removed session 22.
Mar 3 14:04:37.725394 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:59718.service - OpenSSH per-connection server daemon (10.0.0.1:59718).
Mar 3 14:04:37.801432 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 59718 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:37.804434 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:37.815367 systemd-logind[1548]: New session 23 of user core.
Mar 3 14:04:37.822395 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 3 14:04:38.008402 sshd[5903]: Connection closed by 10.0.0.1 port 59718
Mar 3 14:04:38.009231 sshd-session[5900]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:38.018008 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:59718.service: Deactivated successfully.
Mar 3 14:04:38.022388 systemd[1]: session-23.scope: Deactivated successfully.
Mar 3 14:04:38.028395 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit.
Mar 3 14:04:38.033001 systemd-logind[1548]: Removed session 23.
Mar 3 14:04:43.043111 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:52424.service - OpenSSH per-connection server daemon (10.0.0.1:52424).
Mar 3 14:04:43.141128 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 52424 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:43.143982 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:43.155820 systemd-logind[1548]: New session 24 of user core.
Mar 3 14:04:43.166027 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 3 14:04:43.358497 sshd[5921]: Connection closed by 10.0.0.1 port 52424
Mar 3 14:04:43.360924 sshd-session[5918]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:43.370974 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:52424.service: Deactivated successfully.
Mar 3 14:04:43.376024 systemd[1]: session-24.scope: Deactivated successfully.
Mar 3 14:04:43.380943 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit.
Mar 3 14:04:43.384486 systemd-logind[1548]: Removed session 24.
Mar 3 14:04:48.393555 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:52438.service - OpenSSH per-connection server daemon (10.0.0.1:52438).
Mar 3 14:04:48.556556 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 52438 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:48.561496 sshd-session[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:48.605144 systemd-logind[1548]: New session 25 of user core.
Mar 3 14:04:48.627498 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 3 14:04:49.185837 sshd[5987]: Connection closed by 10.0.0.1 port 52438
Mar 3 14:04:49.185740 sshd-session[5984]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:49.207230 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:52438.service: Deactivated successfully.
Mar 3 14:04:49.225413 systemd[1]: session-25.scope: Deactivated successfully.
Mar 3 14:04:49.233944 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Mar 3 14:04:49.247991 systemd-logind[1548]: Removed session 25.
Mar 3 14:04:54.208043 systemd[1]: Started sshd@25-10.0.0.115:22-10.0.0.1:51942.service - OpenSSH per-connection server daemon (10.0.0.1:51942).
Mar 3 14:04:54.454942 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 51942 ssh2: RSA SHA256:MREuPPXaZkIEMIoke3bDsmEmgOBlUzy9TvoL75x3JlI
Mar 3 14:04:54.459335 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:04:54.474411 systemd-logind[1548]: New session 26 of user core.
Mar 3 14:04:54.486789 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 3 14:04:54.958739 sshd[6054]: Connection closed by 10.0.0.1 port 51942
Mar 3 14:04:54.959449 sshd-session[6034]: pam_unix(sshd:session): session closed for user core
Mar 3 14:04:54.977432 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:51942.service: Deactivated successfully.
Mar 3 14:04:54.978066 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit.
Mar 3 14:04:54.984387 systemd[1]: session-26.scope: Deactivated successfully.
Mar 3 14:04:54.989126 systemd-logind[1548]: Removed session 26.
Mar 3 14:04:55.181779 kubelet[2855]: E0303 14:04:55.181525 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:04:56.189511 kubelet[2855]: E0303 14:04:56.189195 2855 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"