Mar 2 14:25:25.446077 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 10:28:24 -00 2026
Mar 2 14:25:25.446110 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 14:25:25.446121 kernel: BIOS-provided physical RAM map:
Mar 2 14:25:25.446133 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 14:25:25.446140 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 14:25:25.446149 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 14:25:25.446160 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 14:25:25.446171 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 14:25:25.446179 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 14:25:25.447426 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 14:25:25.447445 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 2 14:25:25.447457 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 14:25:25.447472 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 14:25:25.447481 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 14:25:25.447492 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 14:25:25.447501 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 14:25:25.447511 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 14:25:25.447523 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 14:25:25.447532 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 14:25:25.447541 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 14:25:25.447551 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 14:25:25.447560 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 14:25:25.447571 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 14:25:25.447580 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 14:25:25.447589 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 14:25:25.447600 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 14:25:25.447610 kernel: NX (Execute Disable) protection: active
Mar 2 14:25:25.447620 kernel: APIC: Static calls initialized
Mar 2 14:25:25.447635 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Mar 2 14:25:25.447645 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Mar 2 14:25:25.447656 kernel: extended physical RAM map:
Mar 2 14:25:25.447666 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 14:25:25.447677 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 14:25:25.447686 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 14:25:25.447696 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 14:25:25.447706 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 14:25:25.447716 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 2 14:25:25.447725 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 2 14:25:25.447735 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Mar 2 14:25:25.447748 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Mar 2 14:25:25.447763 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Mar 2 14:25:25.447775 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Mar 2 14:25:25.447787 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Mar 2 14:25:25.447797 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 2 14:25:25.447812 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 2 14:25:25.447824 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 2 14:25:25.447836 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 2 14:25:25.447847 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 14:25:25.447859 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Mar 2 14:25:25.447870 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Mar 2 14:25:25.447882 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Mar 2 14:25:25.447892 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Mar 2 14:25:25.447904 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 2 14:25:25.447916 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 14:25:25.447927 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 2 14:25:25.447943 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 14:25:25.447955 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 2 14:25:25.447966 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 14:25:25.447978 kernel: efi: EFI v2.7 by EDK II
Mar 2 14:25:25.447989 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Mar 2 14:25:25.448001 kernel: random: crng init done
Mar 2 14:25:25.448012 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 2 14:25:25.448023 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 2 14:25:25.448034 kernel: secureboot: Secure boot disabled
Mar 2 14:25:25.448044 kernel: SMBIOS 2.8 present.
Mar 2 14:25:25.448055 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 2 14:25:25.448070 kernel: DMI: Memory slots populated: 1/1
Mar 2 14:25:25.448081 kernel: Hypervisor detected: KVM
Mar 2 14:25:25.448093 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 14:25:25.448104 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 14:25:25.448114 kernel: kvm-clock: using sched offset of 26052935904 cycles
Mar 2 14:25:25.448127 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 14:25:25.448138 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 14:25:25.448151 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 14:25:25.448162 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 14:25:25.448173 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 2 14:25:25.448184 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 2 14:25:25.448269 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 14:25:25.448280 kernel: Using GB pages for direct mapping
Mar 2 14:25:25.448290 kernel: ACPI: Early table checksum verification disabled
Mar 2 14:25:25.448389 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 2 14:25:25.448401 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 2 14:25:25.448412 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:25:25.448423 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:25:25.448434 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 2 14:25:25.448448 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:25:25.448459 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:25:25.448469 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:25:25.448480 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 14:25:25.448491 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 2 14:25:25.448501 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 2 14:25:25.448513 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 2 14:25:25.448525 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 2 14:25:25.448534 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 2 14:25:25.448550 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 2 14:25:25.448561 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 2 14:25:25.448572 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 2 14:25:25.448583 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 2 14:25:25.448594 kernel: No NUMA configuration found
Mar 2 14:25:25.448604 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 2 14:25:25.448615 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Mar 2 14:25:25.448627 kernel: Zone ranges:
Mar 2 14:25:25.448637 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 14:25:25.448651 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 2 14:25:25.448662 kernel: Normal empty
Mar 2 14:25:25.448672 kernel: Device empty
Mar 2 14:25:25.448683 kernel: Movable zone start for each node
Mar 2 14:25:25.448693 kernel: Early memory node ranges
Mar 2 14:25:25.448704 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 2 14:25:25.448714 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 2 14:25:25.448725 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 2 14:25:25.448735 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 2 14:25:25.448749 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 2 14:25:25.448760 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 2 14:25:25.448770 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Mar 2 14:25:25.448781 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Mar 2 14:25:25.448792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 2 14:25:25.448803 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 14:25:25.448823 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 2 14:25:25.448838 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 2 14:25:25.448849 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 14:25:25.448860 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 2 14:25:25.448871 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 2 14:25:25.448882 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 2 14:25:25.448898 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 2 14:25:25.448909 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 2 14:25:25.448921 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 14:25:25.448932 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 14:25:25.448943 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 14:25:25.448957 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 14:25:25.448969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 14:25:25.448981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 14:25:25.448992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 14:25:25.449003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 14:25:25.449014 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 14:25:25.449026 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 14:25:25.449038 kernel: TSC deadline timer available
Mar 2 14:25:25.449050 kernel: CPU topo: Max. logical packages: 1
Mar 2 14:25:25.449066 kernel: CPU topo: Max. logical dies: 1
Mar 2 14:25:25.449078 kernel: CPU topo: Max. dies per package: 1
Mar 2 14:25:25.449090 kernel: CPU topo: Max. threads per core: 1
Mar 2 14:25:25.449103 kernel: CPU topo: Num. cores per package: 4
Mar 2 14:25:25.449114 kernel: CPU topo: Num. threads per package: 4
Mar 2 14:25:25.449127 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 2 14:25:25.449139 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 14:25:25.449151 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 14:25:25.449162 kernel: kvm-guest: setup PV sched yield
Mar 2 14:25:25.449174 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 2 14:25:25.449866 kernel: Booting paravirtualized kernel on KVM
Mar 2 14:25:25.449883 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 14:25:25.449896 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 14:25:25.449908 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 2 14:25:25.449921 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 2 14:25:25.449931 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 14:25:25.449942 kernel: kvm-guest: PV spinlocks enabled
Mar 2 14:25:25.449953 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 14:25:25.449966 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2
Mar 2 14:25:25.449983 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 14:25:25.449996 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 14:25:25.450009 kernel: Fallback order for Node 0: 0
Mar 2 14:25:25.450021 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Mar 2 14:25:25.450032 kernel: Policy zone: DMA32
Mar 2 14:25:25.450044 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 14:25:25.451485 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 14:25:25.451506 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 2 14:25:25.451524 kernel: ftrace: allocated 157 pages with 5 groups
Mar 2 14:25:25.451535 kernel: Dynamic Preempt: voluntary
Mar 2 14:25:25.451546 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 14:25:25.451558 kernel: rcu: RCU event tracing is enabled.
Mar 2 14:25:25.451570 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 14:25:25.451582 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 14:25:25.451593 kernel: Rude variant of Tasks RCU enabled.
Mar 2 14:25:25.451604 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 14:25:25.451615 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 14:25:25.451630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 14:25:25.451641 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 14:25:25.451652 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 14:25:25.451664 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 14:25:25.451988 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 14:25:25.452001 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 14:25:25.452013 kernel: Console: colour dummy device 80x25
Mar 2 14:25:25.452023 kernel: printk: legacy console [ttyS0] enabled
Mar 2 14:25:25.452034 kernel: ACPI: Core revision 20240827
Mar 2 14:25:25.452052 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 14:25:25.452063 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 14:25:25.452076 kernel: x2apic enabled
Mar 2 14:25:25.452089 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 14:25:25.452101 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 14:25:25.452113 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 14:25:25.452125 kernel: kvm-guest: setup PV IPIs
Mar 2 14:25:25.452137 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 14:25:25.452148 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 14:25:25.452164 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 14:25:25.452176 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 14:25:25.452251 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 14:25:25.452265 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 14:25:25.452277 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 14:25:25.452290 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 14:25:25.452656 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 14:25:25.452674 kernel: Speculative Store Bypass: Vulnerable
Mar 2 14:25:25.452687 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 14:25:25.452706 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 14:25:25.452717 kernel: active return thunk: srso_alias_return_thunk
Mar 2 14:25:25.452728 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 14:25:25.452738 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 14:25:25.452748 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 14:25:25.452758 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 14:25:25.452768 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 14:25:25.452778 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 14:25:25.452791 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 14:25:25.452802 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 14:25:25.452815 kernel: Freeing SMP alternatives memory: 32K
Mar 2 14:25:25.452826 kernel: pid_max: default: 32768 minimum: 301
Mar 2 14:25:25.452836 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 2 14:25:25.452847 kernel: landlock: Up and running.
Mar 2 14:25:25.452857 kernel: SELinux: Initializing.
Mar 2 14:25:25.452867 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 14:25:25.452878 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 14:25:25.452891 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 14:25:25.452901 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 14:25:25.452911 kernel: signal: max sigframe size: 1776
Mar 2 14:25:25.452922 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 14:25:25.452932 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 14:25:25.452943 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 2 14:25:25.452953 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 14:25:25.452963 kernel: smp: Bringing up secondary CPUs ...
Mar 2 14:25:25.452973 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 14:25:25.452986 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 14:25:25.452998 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 14:25:25.453009 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 14:25:25.453021 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145388K reserved, 0K cma-reserved)
Mar 2 14:25:25.453031 kernel: devtmpfs: initialized
Mar 2 14:25:25.453041 kernel: x86/mm: Memory block size: 128MB
Mar 2 14:25:25.453051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 2 14:25:25.453061 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 2 14:25:25.453073 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 2 14:25:25.453090 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 2 14:25:25.453102 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Mar 2 14:25:25.453115 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 2 14:25:25.453128 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 14:25:25.453139 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 14:25:25.453151 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 14:25:25.453163 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 14:25:25.453176 kernel: audit: initializing netlink subsys (disabled)
Mar 2 14:25:25.453250 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 14:25:25.453271 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 14:25:25.453283 kernel: audit: type=2000 audit(1772461488.457:1): state=initialized audit_enabled=0 res=1
Mar 2 14:25:25.453295 kernel: cpuidle: using governor menu
Mar 2 14:25:25.453409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 14:25:25.453421 kernel: dca service started, version 1.12.1
Mar 2 14:25:25.453431 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 2 14:25:25.453442 kernel: PCI: Using configuration type 1 for base access
Mar 2 14:25:25.453452 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 14:25:25.453466 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 14:25:25.453476 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 14:25:25.453488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 14:25:25.453500 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 14:25:25.453510 kernel: ACPI: Added _OSI(Module Device)
Mar 2 14:25:25.453521 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 14:25:25.453534 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 14:25:25.453544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 14:25:25.453555 kernel: ACPI: Interpreter enabled
Mar 2 14:25:25.453565 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 14:25:25.453579 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 14:25:25.453589 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 14:25:25.453600 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 14:25:25.453610 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 14:25:25.453620 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 14:25:25.453858 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 14:25:25.454037 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 14:25:25.455612 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 14:25:25.455631 kernel: PCI host bridge to bus 0000:00
Mar 2 14:25:25.455805 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 14:25:25.455969 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 14:25:25.456147 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 14:25:25.456596 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 2 14:25:25.456775 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 2 14:25:25.456941 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 2 14:25:25.457100 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 14:25:25.457498 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 2 14:25:25.458296 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 2 14:25:25.458565 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Mar 2 14:25:25.458731 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Mar 2 14:25:25.458917 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 2 14:25:25.459107 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 14:25:25.459496 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 36132 usecs
Mar 2 14:25:25.459698 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 2 14:25:25.459890 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Mar 2 14:25:25.460093 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Mar 2 14:25:25.460480 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 2 14:25:25.460674 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 2 14:25:25.460837 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Mar 2 14:25:25.460995 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Mar 2 14:25:25.461168 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 2 14:25:25.461904 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 2 14:25:25.462101 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Mar 2 14:25:25.462481 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Mar 2 14:25:25.462654 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 2 14:25:25.462813 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Mar 2 14:25:25.462991 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 2 14:25:25.463261 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 14:25:25.463553 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs
Mar 2 14:25:25.463743 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 2 14:25:25.463912 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Mar 2 14:25:25.464073 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Mar 2 14:25:25.464628 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 2 14:25:25.464798 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Mar 2 14:25:25.464818 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 14:25:25.464832 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 14:25:25.464844 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 14:25:25.464856 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 14:25:25.464873 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 14:25:25.464884 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 14:25:25.464894 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 14:25:25.464905 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 14:25:25.464915 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 14:25:25.464926 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 14:25:25.464937 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 14:25:25.464947 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 14:25:25.464958 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 14:25:25.464972 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 14:25:25.464983 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 14:25:25.464995 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 14:25:25.465006 kernel: iommu: Default domain type: Translated
Mar 2 14:25:25.465017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 14:25:25.465029 kernel: efivars: Registered efivars operations
Mar 2 14:25:25.465040 kernel: PCI: Using ACPI for IRQ routing
Mar 2 14:25:25.465052 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 14:25:25.465063 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 2 14:25:25.465078 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 2 14:25:25.465092 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Mar 2 14:25:25.465103 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Mar 2 14:25:25.465114 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 2 14:25:25.465126 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 2 14:25:25.465138 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Mar 2 14:25:25.465151 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 2 14:25:25.465488 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 14:25:25.465667 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 14:25:25.465829 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 14:25:25.465851 kernel: vgaarb: loaded
Mar 2 14:25:25.465863 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 14:25:25.465875 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 14:25:25.465886 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 14:25:25.465898 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 14:25:25.465909 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 14:25:25.465921 kernel: pnp: PnP ACPI init
Mar 2 14:25:25.466100 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 2 14:25:25.466125 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 14:25:25.466138 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 14:25:25.466149 kernel: NET: Registered PF_INET protocol family
Mar 2 14:25:25.466161 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 14:25:25.466172 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 14:25:25.466269 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 14:25:25.466285 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 14:25:25.466297 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 14:25:25.466394 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 14:25:25.466411 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 14:25:25.466423 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 14:25:25.466434 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 14:25:25.466445 kernel: NET: Registered PF_XDP protocol family
Mar 2 14:25:25.466625 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Mar 2 14:25:25.466817 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Mar 2 14:25:25.466995 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 14:25:25.467161 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 14:25:25.467585 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 14:25:25.467747 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 2 14:25:25.467911 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 2 14:25:25.468069 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 2 14:25:25.468085 kernel: PCI: CLS 0 bytes, default 64
Mar 2 14:25:25.468101 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 2 14:25:25.468113 kernel: Initialise system trusted keyrings
Mar 2 14:25:25.468127 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 14:25:25.468144 kernel: Key type asymmetric registered
Mar 2 14:25:25.468155 kernel: Asymmetric key parser 'x509' registered
Mar 2 14:25:25.468165 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 14:25:25.468176 kernel: io scheduler mq-deadline registered
Mar 2 14:25:25.468560 kernel: io scheduler kyber registered
Mar 2 14:25:25.468576 kernel: io scheduler bfq registered
Mar 2 14:25:25.468587 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 14:25:25.468599 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 14:25:25.468614 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 14:25:25.468626 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 14:25:25.468637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 14:25:25.468649 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 14:25:25.468663 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 14:25:25.468673 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 14:25:25.468686 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 14:25:25.468879 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 14:25:25.468898 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 2 14:25:25.469112 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 14:25:25.469468 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T14:25:22 UTC (1772461522)
Mar 2 14:25:25.469633 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 2 14:25:25.469652 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 14:25:25.469666 kernel: efifb: probing for efifb
Mar 2 14:25:25.469679 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 2 14:25:25.469697 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 2 14:25:25.469708 kernel: efifb: scrolling: redraw
Mar 2 14:25:25.469720 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 2 14:25:25.469734 kernel: Console: switching to colour frame buffer device 160x50
Mar 2 14:25:25.469745 kernel: fb0: EFI VGA frame buffer device
Mar 2 14:25:25.469756 kernel: pstore: Using crash dump compression: deflate
Mar 2 14:25:25.469767 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 2 14:25:25.469777 kernel: NET: Registered PF_INET6 protocol family
Mar 2 14:25:25.469788 kernel: Segment Routing with IPv6
Mar 2 14:25:25.469802 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 14:25:25.469813 kernel: NET: Registered PF_PACKET protocol family
Mar 2 14:25:25.469823 kernel: Key type dns_resolver registered
Mar 2 14:25:25.469836 kernel: IPI shorthand broadcast: enabled
Mar 2 14:25:25.469848 kernel: sched_clock: Marking stable (18732114013, 12808091719)->(37416158022, -5875952290)
Mar 2 14:25:25.469861 kernel: registered taskstats version 1
Mar 2 14:25:25.469874 kernel: Loading compiled-in X.509 certificates
Mar 2 14:25:25.469889 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: ca052fea375a75b056ebd4154b64794dffb70b96'
Mar 2 14:25:25.469900 kernel: Demotion targets for Node 0: null
Mar 2 14:25:25.469913 kernel: Key type .fscrypt
registered Mar 2 14:25:25.469924 kernel: Key type fscrypt-provisioning registered Mar 2 14:25:25.469935 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 2 14:25:25.469946 kernel: ima: Allocated hash algorithm: sha1 Mar 2 14:25:25.469956 kernel: ima: No architecture policies found Mar 2 14:25:25.469968 kernel: clk: Disabling unused clocks Mar 2 14:25:25.469980 kernel: Warning: unable to open an initial console. Mar 2 14:25:25.469991 kernel: Freeing unused kernel image (initmem) memory: 46192K Mar 2 14:25:25.470002 kernel: Write protecting the kernel read-only data: 40960k Mar 2 14:25:25.470016 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 2 14:25:25.470026 kernel: Run /init as init process Mar 2 14:25:25.470037 kernel: with arguments: Mar 2 14:25:25.470048 kernel: /init Mar 2 14:25:25.470058 kernel: with environment: Mar 2 14:25:25.470068 kernel: HOME=/ Mar 2 14:25:25.470079 kernel: TERM=linux Mar 2 14:25:25.470093 systemd[1]: Successfully made /usr/ read-only. Mar 2 14:25:25.470114 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 2 14:25:25.470129 systemd[1]: Detected virtualization kvm. Mar 2 14:25:25.470140 systemd[1]: Detected architecture x86-64. Mar 2 14:25:25.470151 systemd[1]: Running in initrd. Mar 2 14:25:25.470162 systemd[1]: No hostname configured, using default hostname. Mar 2 14:25:25.470173 systemd[1]: Hostname set to . Mar 2 14:25:25.470184 systemd[1]: Initializing machine ID from VM UUID. Mar 2 14:25:25.470255 systemd[1]: Queued start job for default target initrd.target. Mar 2 14:25:25.470271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 2 14:25:25.470282 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 14:25:25.470294 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 14:25:25.470387 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 14:25:25.470399 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 14:25:25.470411 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 14:25:25.470427 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 14:25:25.470439 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 14:25:25.470450 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 14:25:25.470461 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 14:25:25.470472 systemd[1]: Reached target paths.target - Path Units. Mar 2 14:25:25.470484 systemd[1]: Reached target slices.target - Slice Units. Mar 2 14:25:25.470495 systemd[1]: Reached target swap.target - Swaps. Mar 2 14:25:25.470506 systemd[1]: Reached target timers.target - Timer Units. Mar 2 14:25:25.470517 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 14:25:25.470531 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 14:25:25.470543 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 14:25:25.470555 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 2 14:25:25.470568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 14:25:25.470580 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 2 14:25:25.470593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 14:25:25.470608 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 14:25:25.470622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 14:25:25.470640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 14:25:25.470653 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 14:25:25.470667 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 2 14:25:25.470680 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 14:25:25.470691 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 14:25:25.470703 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 14:25:25.470715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:25:25.470727 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 14:25:25.470784 systemd-journald[204]: Collecting audit messages is disabled. Mar 2 14:25:25.470824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 14:25:25.470838 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 14:25:25.470852 systemd-journald[204]: Journal started Mar 2 14:25:25.470876 systemd-journald[204]: Runtime Journal (/run/log/journal/a5719e7e75bf4cec95fa3ab063f50be4) is 6M, max 48.1M, 42.1M free. Mar 2 14:25:25.491541 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 14:25:25.497900 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 14:25:25.504548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Mar 2 14:25:25.546166 systemd-modules-load[205]: Inserted module 'overlay' Mar 2 14:25:25.586955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 14:25:25.595106 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 2 14:25:25.634975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 14:25:25.651555 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 14:25:25.688150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 14:25:25.728536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 14:25:25.817894 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 14:25:25.877676 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 14:25:25.879718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 14:25:25.922805 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 14:25:25.942878 kernel: Bridge firewalling registered Mar 2 14:25:25.944173 systemd-modules-load[205]: Inserted module 'br_netfilter' Mar 2 14:25:25.957975 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 14:25:25.977564 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 14:25:26.027114 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 14:25:26.053570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 2 14:25:26.080681 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=82731586f036a8515942386c762f58de23efa7b4e7ecf4198e267e112154cbc2 Mar 2 14:25:26.264853 systemd-resolved[260]: Positive Trust Anchors: Mar 2 14:25:26.264930 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 14:25:26.264974 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 14:25:26.272669 systemd-resolved[260]: Defaulting to hostname 'linux'. Mar 2 14:25:26.276179 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 14:25:26.292537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 14:25:26.479546 kernel: SCSI subsystem initialized Mar 2 14:25:26.510185 kernel: Loading iSCSI transport class v2.0-870. Mar 2 14:25:26.548917 kernel: iscsi: registered transport (tcp) Mar 2 14:25:26.599496 kernel: iscsi: registered transport (qla4xxx) Mar 2 14:25:26.599587 kernel: QLogic iSCSI HBA Driver Mar 2 14:25:26.671881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 2 14:25:26.739293 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 14:25:26.755925 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 14:25:26.920719 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 14:25:26.933529 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 14:25:27.112534 kernel: raid6: avx2x4 gen() 18197 MB/s Mar 2 14:25:27.133556 kernel: raid6: avx2x2 gen() 17143 MB/s Mar 2 14:25:27.156833 kernel: raid6: avx2x1 gen() 11207 MB/s Mar 2 14:25:27.156901 kernel: raid6: using algorithm avx2x4 gen() 18197 MB/s Mar 2 14:25:27.182938 kernel: raid6: .... xor() 3073 MB/s, rmw enabled Mar 2 14:25:27.183014 kernel: raid6: using avx2x2 recovery algorithm Mar 2 14:25:27.224900 kernel: xor: automatically using best checksumming function avx Mar 2 14:25:27.976705 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 14:25:28.012429 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 14:25:28.036840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 14:25:28.131195 systemd-udevd[457]: Using default interface naming scheme 'v255'. Mar 2 14:25:28.161296 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 14:25:28.205972 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 14:25:28.291481 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Mar 2 14:25:28.398050 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 14:25:28.428852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 14:25:28.652107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 14:25:28.684853 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 2 14:25:28.914592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 14:25:28.964010 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 14:25:28.964475 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 14:25:28.914870 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 14:25:29.013763 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 14:25:29.013798 kernel: GPT:9289727 != 19775487 Mar 2 14:25:29.013813 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 14:25:29.013826 kernel: GPT:9289727 != 19775487 Mar 2 14:25:29.013839 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 14:25:29.013856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 14:25:28.943138 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:25:29.012185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:25:29.032047 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 2 14:25:29.149641 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 14:25:29.149702 kernel: libata version 3.00 loaded. Mar 2 14:25:29.173140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 14:25:29.256429 kernel: AES CTR mode by8 optimization enabled Mar 2 14:25:29.305484 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 14:25:29.344440 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 14:25:29.350025 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 14:25:29.377136 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 2 14:25:29.389078 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 2 14:25:29.389551 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 2 14:25:29.389774 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 14:25:29.405655 kernel: scsi host0: ahci Mar 2 14:25:29.414653 kernel: scsi host1: ahci Mar 2 14:25:29.426476 kernel: scsi host2: ahci Mar 2 14:25:29.434431 kernel: scsi host3: ahci Mar 2 14:25:29.442441 kernel: scsi host4: ahci Mar 2 14:25:29.464493 kernel: scsi host5: ahci Mar 2 14:25:29.464828 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Mar 2 14:25:29.464862 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Mar 2 14:25:29.465002 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 14:25:29.564800 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Mar 2 14:25:29.564846 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Mar 2 14:25:29.564863 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Mar 2 14:25:29.564879 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Mar 2 14:25:29.560541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 14:25:29.603073 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 2 14:25:29.632713 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 14:25:29.707592 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 14:25:29.779864 disk-uuid[621]: Primary Header is updated. Mar 2 14:25:29.779864 disk-uuid[621]: Secondary Entries is updated. 
Mar 2 14:25:29.779864 disk-uuid[621]: Secondary Header is updated. Mar 2 14:25:29.824990 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 14:25:29.862742 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:29.886519 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:29.907671 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:29.925942 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:29.942936 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 14:25:29.961439 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 14:25:29.983023 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 14:25:29.983096 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 14:25:29.983114 kernel: ata3.00: applying bridge limits Mar 2 14:25:30.020573 kernel: ata3.00: LPM support broken, forcing max_power Mar 2 14:25:30.020640 kernel: ata3.00: configured for UDMA/100 Mar 2 14:25:30.037908 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 14:25:30.185999 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 14:25:30.186606 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 14:25:30.226468 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 14:25:30.869484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 14:25:30.919837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 14:25:30.919868 disk-uuid[623]: The operation has completed successfully. Mar 2 14:25:30.948056 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 14:25:30.962198 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 14:25:30.962508 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 14:25:31.053218 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 14:25:31.069838 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 2 14:25:31.070116 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 14:25:31.151480 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 14:25:31.208124 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 14:25:31.286107 sh[651]: Success Mar 2 14:25:31.416551 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 14:25:31.416637 kernel: device-mapper: uevent: version 1.0.3 Mar 2 14:25:31.434736 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 2 14:25:31.523110 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 2 14:25:31.679747 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 14:25:31.708664 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 14:25:31.805757 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 14:25:31.888885 kernel: BTRFS: device fsid 760529e6-8e55-47fc-ad5a-c1c1d184e50a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (663) Mar 2 14:25:31.888976 kernel: BTRFS info (device dm-0): first mount of filesystem 760529e6-8e55-47fc-ad5a-c1c1d184e50a Mar 2 14:25:31.907978 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 14:25:31.998153 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 2 14:25:31.998779 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 2 14:25:32.012641 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 14:25:32.022910 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 2 14:25:32.059811 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 2 14:25:32.063653 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 14:25:32.092067 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 14:25:32.297716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (698) Mar 2 14:25:32.329692 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:32.329774 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 14:25:32.397045 kernel: BTRFS info (device vda6): turning on async discard Mar 2 14:25:32.397121 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 14:25:32.452920 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:32.502856 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 14:25:32.528677 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 14:25:32.975073 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 14:25:32.996493 ignition[773]: Ignition 2.22.0 Mar 2 14:25:32.996504 ignition[773]: Stage: fetch-offline Mar 2 14:25:33.025863 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 2 14:25:32.996693 ignition[773]: no configs at "/usr/lib/ignition/base.d" Mar 2 14:25:32.996707 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 14:25:32.996829 ignition[773]: parsed url from cmdline: "" Mar 2 14:25:32.996835 ignition[773]: no config URL provided Mar 2 14:25:32.996842 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 14:25:32.996858 ignition[773]: no config at "/usr/lib/ignition/user.ign" Mar 2 14:25:32.997061 ignition[773]: op(1): [started] loading QEMU firmware config module Mar 2 14:25:32.997071 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 14:25:33.034041 ignition[773]: op(1): [finished] loading QEMU firmware config module Mar 2 14:25:33.276990 systemd-networkd[841]: lo: Link UP Mar 2 14:25:33.279210 systemd-networkd[841]: lo: Gained carrier Mar 2 14:25:33.284113 systemd-networkd[841]: Enumeration completed Mar 2 14:25:33.284642 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 14:25:33.291018 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 14:25:33.291026 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 14:25:33.302014 systemd-networkd[841]: eth0: Link UP Mar 2 14:25:33.304035 systemd-networkd[841]: eth0: Gained carrier Mar 2 14:25:33.304055 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 14:25:33.451971 systemd[1]: Reached target network.target - Network. 
Mar 2 14:25:33.524757 systemd-networkd[841]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 14:25:34.311754 ignition[773]: parsing config with SHA512: 18a88f046de90dc1d1ade67aa0b10b6198eedb463fd26f21cf6c2fcd0af7fc0cea8a09afb133800b6d8a35b0319bee65254da2e2513c778f3dbc7b6f03def345 Mar 2 14:25:34.330733 unknown[773]: fetched base config from "system" Mar 2 14:25:34.330751 unknown[773]: fetched user config from "qemu" Mar 2 14:25:34.333461 ignition[773]: fetch-offline: fetch-offline passed Mar 2 14:25:34.344772 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 14:25:34.333559 ignition[773]: Ignition finished successfully Mar 2 14:25:34.369747 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 14:25:34.376796 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 14:25:34.720509 ignition[846]: Ignition 2.22.0 Mar 2 14:25:34.721506 ignition[846]: Stage: kargs Mar 2 14:25:34.730850 ignition[846]: no configs at "/usr/lib/ignition/base.d" Mar 2 14:25:34.730866 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 14:25:34.762630 ignition[846]: kargs: kargs passed Mar 2 14:25:34.762778 ignition[846]: Ignition finished successfully Mar 2 14:25:34.793830 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 14:25:34.825846 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 2 14:25:35.053010 systemd-networkd[841]: eth0: Gained IPv6LL Mar 2 14:25:35.221860 ignition[854]: Ignition 2.22.0 Mar 2 14:25:35.226381 ignition[854]: Stage: disks Mar 2 14:25:35.226589 ignition[854]: no configs at "/usr/lib/ignition/base.d" Mar 2 14:25:35.226604 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 14:25:35.227987 ignition[854]: disks: disks passed Mar 2 14:25:35.228116 ignition[854]: Ignition finished successfully Mar 2 14:25:35.267066 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 14:25:35.285710 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 14:25:35.285935 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 14:25:35.286003 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 14:25:35.286059 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 14:25:35.286099 systemd[1]: Reached target basic.target - Basic System. Mar 2 14:25:35.294618 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 14:25:35.572877 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 2 14:25:35.602954 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 14:25:35.613473 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 14:25:36.632561 kernel: EXT4-fs (vda9): mounted filesystem 9d55f1a4-66ad-43d6-b325-f6b8d2d08c3e r/w with ordered data mode. Quota mode: none. Mar 2 14:25:36.637169 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 14:25:36.661122 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 14:25:36.709895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 14:25:36.761590 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 2 14:25:36.784906 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 2 14:25:36.881680 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (873) Mar 2 14:25:36.881718 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:36.881735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 14:25:36.784981 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 14:25:36.785448 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 14:25:36.851968 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 14:25:36.925169 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 2 14:25:36.961205 kernel: BTRFS info (device vda6): turning on async discard Mar 2 14:25:36.961674 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 14:25:36.986594 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 14:25:37.212062 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 14:25:37.250527 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory Mar 2 14:25:37.307511 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 14:25:37.350925 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 14:25:37.941647 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 14:25:37.979184 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 14:25:37.994235 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 14:25:38.070794 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 2 14:25:38.090678 kernel: BTRFS info (device vda6): last unmount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:38.220677 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 14:25:38.335775 ignition[986]: INFO : Ignition 2.22.0 Mar 2 14:25:38.344672 ignition[986]: INFO : Stage: mount Mar 2 14:25:38.355507 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 14:25:38.355507 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 14:25:38.383592 ignition[986]: INFO : mount: mount passed Mar 2 14:25:38.383592 ignition[986]: INFO : Ignition finished successfully Mar 2 14:25:38.404973 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 14:25:38.421555 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 14:25:38.510172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 14:25:38.606454 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (999) Mar 2 14:25:38.606531 kernel: BTRFS info (device vda6): first mount of filesystem 81b29f52-362f-4f57-bc73-813781f2dfeb Mar 2 14:25:38.622551 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 14:25:38.665590 kernel: BTRFS info (device vda6): turning on async discard Mar 2 14:25:38.665676 kernel: BTRFS info (device vda6): enabling free space tree Mar 2 14:25:38.678792 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 14:25:38.783673 ignition[1016]: INFO : Ignition 2.22.0
Mar 2 14:25:38.783673 ignition[1016]: INFO : Stage: files
Mar 2 14:25:38.802904 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 14:25:38.802904 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 14:25:38.802904 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 14:25:38.802904 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 14:25:38.802904 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 14:25:38.868683 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 14:25:38.868683 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 14:25:38.868683 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 14:25:38.868683 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 14:25:38.868683 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 14:25:38.817907 unknown[1016]: wrote ssh authorized keys file for user: core
Mar 2 14:25:39.239924 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 14:25:39.399250 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 14:25:39.399250 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 14:25:39.437894 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 2 14:25:39.914436 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 14:25:40.668812 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 14:25:40.668812 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 14:25:40.709433 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 14:25:40.733552 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 14:25:40.733552 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 14:25:40.733552 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 2 14:25:40.733552 ignition[1016]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 14:25:40.807610 ignition[1016]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 14:25:40.807610 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 2 14:25:40.807610 ignition[1016]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 14:25:40.916559 ignition[1016]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 14:25:40.937711 ignition[1016]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 14:25:40.959715 ignition[1016]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 14:25:40.959715 ignition[1016]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 14:25:40.959715 ignition[1016]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 14:25:40.959715 ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 14:25:40.959715 ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 14:25:40.959715 ignition[1016]: INFO : files: files passed
Mar 2 14:25:40.959715 ignition[1016]: INFO : Ignition finished successfully
Mar 2 14:25:41.012610 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 14:25:41.115558 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 14:25:41.152783 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 14:25:41.221748 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 14:25:41.265904 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 14:25:41.224774 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 14:25:41.309879 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 14:25:41.309879 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 14:25:41.353162 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 14:25:41.373907 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 14:25:41.389839 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 14:25:41.443106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 14:25:41.610144 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 14:25:41.610653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 14:25:41.657807 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 14:25:41.678003 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 14:25:41.705163 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 14:25:41.708917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 14:25:41.833863 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 14:25:41.863784 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 14:25:41.936471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 14:25:41.949729 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 14:25:41.978133 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 14:25:41.989896 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 14:25:41.990116 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 14:25:42.029177 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 14:25:42.063071 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 14:25:42.065256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 14:25:42.110580 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 14:25:42.126057 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 14:25:42.150551 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 2 14:25:42.163242 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 14:25:42.206798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 14:25:42.220571 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 14:25:42.255861 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 14:25:42.285181 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 14:25:42.304874 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 14:25:42.306084 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 14:25:42.342532 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 14:25:42.356784 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 14:25:42.421123 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 14:25:42.437397 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 14:25:42.453815 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 14:25:42.454000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 14:25:42.513917 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 14:25:42.514208 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 14:25:42.528229 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 14:25:42.594748 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 14:25:42.607139 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 14:25:42.621766 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 14:25:42.633479 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 14:25:42.680682 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 14:25:42.680827 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 14:25:42.699174 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 14:25:42.701071 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 14:25:42.716532 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 14:25:42.716721 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 14:25:42.731005 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 14:25:42.731173 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 14:25:42.760958 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 14:25:42.774959 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 14:25:42.775242 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 14:25:42.799705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 14:25:42.834088 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 14:25:42.834608 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 14:25:42.863231 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 14:25:42.863626 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 14:25:42.945231 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 14:25:42.945965 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 14:25:42.983953 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 14:25:43.006896 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 14:25:43.007258 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 14:25:43.030236 ignition[1071]: INFO : Ignition 2.22.0
Mar 2 14:25:43.030236 ignition[1071]: INFO : Stage: umount
Mar 2 14:25:43.030236 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 14:25:43.030236 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 14:25:43.030236 ignition[1071]: INFO : umount: umount passed
Mar 2 14:25:43.030236 ignition[1071]: INFO : Ignition finished successfully
Mar 2 14:25:43.045100 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 14:25:43.045564 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 14:25:43.081216 systemd[1]: Stopped target network.target - Network.
Mar 2 14:25:43.101257 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 14:25:43.101594 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 14:25:43.109207 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 14:25:43.109501 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 14:25:43.142077 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 14:25:43.142187 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 14:25:43.157662 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 14:25:43.157754 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 14:25:43.177092 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 14:25:43.177187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 14:25:43.192048 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 14:25:43.211259 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 14:25:43.251011 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 14:25:43.251456 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 14:25:43.337476 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 2 14:25:43.349650 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 14:25:43.349946 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 14:25:43.377728 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 2 14:25:43.381767 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 2 14:25:43.426689 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 14:25:43.426848 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 14:25:43.483630 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 14:25:43.483871 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 14:25:43.483968 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 14:25:43.576242 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 14:25:43.577778 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 14:25:43.613959 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 14:25:43.614045 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 14:25:43.625022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 14:25:43.625114 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 14:25:43.661757 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 14:25:43.664732 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 2 14:25:43.664840 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 2 14:25:43.754103 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 14:25:43.761904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 14:25:43.777679 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 14:25:43.777900 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 14:25:43.807913 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 14:25:43.808003 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 14:25:43.818543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 14:25:43.818606 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 14:25:43.844723 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 14:25:43.844832 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 14:25:43.905604 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 14:25:43.905720 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 14:25:43.920217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 14:25:43.920500 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 14:25:43.938866 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 14:25:43.965907 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 2 14:25:43.966024 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 14:25:44.016168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 14:25:44.016462 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 14:25:44.047937 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 14:25:44.048035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 14:25:44.078599 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 2 14:25:44.078690 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 2 14:25:44.078759 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 2 14:25:44.081534 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 14:25:44.081751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 14:25:44.099700 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 14:25:44.115479 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 14:25:44.203215 systemd[1]: Switching root.
Mar 2 14:25:44.315793 systemd-journald[204]: Journal stopped
Mar 2 14:25:48.905496 systemd-journald[204]: Received SIGTERM from PID 1 (systemd).
Mar 2 14:25:48.905590 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 14:25:48.905620 kernel: SELinux: policy capability open_perms=1
Mar 2 14:25:48.905641 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 14:25:48.905656 kernel: SELinux: policy capability always_check_network=0
Mar 2 14:25:48.905671 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 14:25:48.905686 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 14:25:48.905701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 14:25:48.905720 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 14:25:48.905735 kernel: SELinux: policy capability userspace_initial_context=0
Mar 2 14:25:48.905757 kernel: audit: type=1403 audit(1772461544.770:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 14:25:48.905772 systemd[1]: Successfully loaded SELinux policy in 171.503ms.
Mar 2 14:25:48.905796 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.249ms.
Mar 2 14:25:48.905818 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 2 14:25:48.905834 systemd[1]: Detected virtualization kvm.
Mar 2 14:25:48.905848 systemd[1]: Detected architecture x86-64.
Mar 2 14:25:48.905870 systemd[1]: Detected first boot.
Mar 2 14:25:48.905887 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 14:25:48.905903 zram_generator::config[1116]: No configuration found.
Mar 2 14:25:48.905919 kernel: Guest personality initialized and is inactive
Mar 2 14:25:48.905935 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 2 14:25:48.905950 kernel: Initialized host personality
Mar 2 14:25:48.905964 kernel: NET: Registered PF_VSOCK protocol family
Mar 2 14:25:48.905980 systemd[1]: Populated /etc with preset unit settings.
Mar 2 14:25:48.905997 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 2 14:25:48.906017 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 14:25:48.906032 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 14:25:48.906048 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 14:25:48.906064 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 14:25:48.906079 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 14:25:48.906095 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 14:25:48.906110 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 14:25:48.906126 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 14:25:48.906144 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 14:25:48.906162 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 14:25:48.906177 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 14:25:48.906193 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 14:25:48.906209 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 14:25:48.906226 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 14:25:48.906242 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 14:25:48.906257 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 14:25:48.906502 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 14:25:48.906524 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 14:25:48.906543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 14:25:48.906559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 14:25:48.906577 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 14:25:48.906595 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 14:25:48.906615 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 14:25:48.906634 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 14:25:48.906652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 14:25:48.906673 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 14:25:48.906690 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 14:25:48.906707 systemd[1]: Reached target swap.target - Swaps.
Mar 2 14:25:48.906724 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 14:25:48.906741 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 14:25:48.906761 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 2 14:25:48.906778 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 14:25:48.906796 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 14:25:48.906813 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 14:25:48.906829 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 14:25:48.906850 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 14:25:48.906866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 14:25:48.906884 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 14:25:48.906901 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 14:25:48.906919 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 14:25:48.906937 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 14:25:48.906954 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 14:25:48.906973 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 14:25:48.906997 systemd[1]: Reached target machines.target - Containers.
Mar 2 14:25:48.907017 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 14:25:48.907036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 14:25:48.907055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 14:25:48.907074 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 14:25:48.907093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 14:25:48.907108 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 14:25:48.907123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 14:25:48.907145 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 14:25:48.907161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 14:25:48.907177 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 14:25:48.907192 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 14:25:48.907207 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 14:25:48.907223 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 14:25:48.907238 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 14:25:48.907255 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 2 14:25:48.907472 kernel: ACPI: bus type drm_connector registered
Mar 2 14:25:48.907490 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 14:25:48.907506 kernel: loop: module loaded
Mar 2 14:25:48.907521 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 14:25:48.907536 kernel: fuse: init (API version 7.41)
Mar 2 14:25:48.907551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 14:25:48.907600 systemd-journald[1201]: Collecting audit messages is disabled.
Mar 2 14:25:48.907629 systemd-journald[1201]: Journal started
Mar 2 14:25:48.907661 systemd-journald[1201]: Runtime Journal (/run/log/journal/a5719e7e75bf4cec95fa3ab063f50be4) is 6M, max 48.1M, 42.1M free.
Mar 2 14:25:46.563172 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 14:25:46.604459 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 14:25:46.606069 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 14:25:46.607547 systemd[1]: systemd-journald.service: Consumed 2.113s CPU time.
Mar 2 14:25:48.935575 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 14:25:48.964736 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 2 14:25:49.001730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 14:25:49.015500 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 14:25:49.015594 systemd[1]: Stopped verity-setup.service.
Mar 2 14:25:49.042555 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 14:25:49.063974 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 14:25:49.067032 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 14:25:49.088012 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 14:25:49.102261 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 14:25:49.109909 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 14:25:49.118775 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 14:25:49.128764 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 14:25:49.140444 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 14:25:49.153922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 14:25:49.168008 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 14:25:49.171126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 14:25:49.192524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 14:25:49.192964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 14:25:49.202642 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 14:25:49.203067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 14:25:49.215124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 14:25:49.215858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 14:25:49.231610 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 14:25:49.232815 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 14:25:49.250819 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 14:25:49.251618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 14:25:49.264922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 14:25:49.289008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 14:25:49.309031 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 14:25:49.324173 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 2 14:25:49.338851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 14:25:49.388959 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 14:25:49.406744 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 14:25:49.431644 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 14:25:49.445922 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 14:25:49.446603 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 14:25:49.476918 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 2 14:25:49.508223 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 2 14:25:49.536217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 14:25:49.552933 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 14:25:49.577872 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 14:25:49.592205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 14:25:49.618559 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 14:25:49.630132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 14:25:49.637252 systemd-journald[1201]: Time spent on flushing to /var/log/journal/a5719e7e75bf4cec95fa3ab063f50be4 is 44.381ms for 1059 entries. Mar 2 14:25:49.637252 systemd-journald[1201]: System Journal (/var/log/journal/a5719e7e75bf4cec95fa3ab063f50be4) is 8M, max 195.6M, 187.6M free. 
Mar 2 14:25:49.741626 systemd-journald[1201]: Received client request to flush runtime journal. Mar 2 14:25:49.638117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 14:25:49.681680 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 14:25:49.708780 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 14:25:49.727960 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 14:25:49.738890 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 14:25:49.750699 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 14:25:49.790897 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 14:25:49.803850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 14:25:49.820041 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 2 14:25:49.848130 kernel: loop0: detected capacity change from 0 to 110984 Mar 2 14:25:49.850463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 14:25:49.917076 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 14:25:49.920969 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 2 14:25:50.031793 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 14:25:50.519738 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 14:25:50.551968 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 14:25:50.611499 kernel: loop1: detected capacity change from 0 to 128560 Mar 2 14:25:50.879216 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Mar 2 14:25:50.879236 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. 
Mar 2 14:25:50.911552 kernel: loop2: detected capacity change from 0 to 217752 Mar 2 14:25:50.914032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 14:25:51.085521 kernel: loop3: detected capacity change from 0 to 110984 Mar 2 14:25:51.176411 kernel: loop4: detected capacity change from 0 to 128560 Mar 2 14:25:51.376647 kernel: loop5: detected capacity change from 0 to 217752 Mar 2 14:25:51.478142 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 14:25:51.484016 (sd-merge)[1259]: Merged extensions into '/usr'. Mar 2 14:25:51.498039 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 14:25:51.498619 systemd[1]: Reloading... Mar 2 14:25:51.817974 zram_generator::config[1281]: No configuration found. Mar 2 14:25:52.536460 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 14:25:52.548248 systemd[1]: Reloading finished in 1048 ms. Mar 2 14:25:52.592195 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 14:25:52.608198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 14:25:52.622712 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 14:25:52.675512 systemd[1]: Starting ensure-sysext.service... Mar 2 14:25:52.690004 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 14:25:52.704106 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 14:25:52.751535 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 2 14:25:52.751597 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Mar 2 14:25:52.751997 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 14:25:52.752515 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 14:25:52.754210 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 14:25:52.754838 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Mar 2 14:25:52.755135 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Mar 2 14:25:52.759555 systemd[1]: Reload requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)... Mar 2 14:25:52.759576 systemd[1]: Reloading... Mar 2 14:25:52.762970 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 14:25:52.763051 systemd-tmpfiles[1324]: Skipping /boot Mar 2 14:25:52.786509 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 14:25:52.786529 systemd-tmpfiles[1324]: Skipping /boot Mar 2 14:25:52.809819 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Mar 2 14:25:52.892556 zram_generator::config[1349]: No configuration found. Mar 2 14:25:53.179427 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 14:25:53.204722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Mar 2 14:25:53.216539 kernel: ACPI: button: Power Button [PWRF] Mar 2 14:25:53.296434 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 2 14:25:53.296831 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 2 14:25:53.297142 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 2 14:25:53.400877 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 2 14:25:53.402100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 2 14:25:53.411786 systemd[1]: Reloading finished in 651 ms. Mar 2 14:25:53.429765 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 14:25:53.458756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 14:25:53.550789 systemd[1]: Finished ensure-sysext.service. Mar 2 14:25:53.627662 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 14:25:53.633854 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 2 14:25:53.650976 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 14:25:53.667535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 14:25:53.847650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 14:25:53.865602 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 14:25:53.881203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 14:25:53.922266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 14:25:53.944798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 14:25:53.949593 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 14:25:53.984146 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 2 14:25:54.022619 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 14:25:54.054559 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 2 14:25:54.090065 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 14:25:54.120507 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 2 14:25:54.146943 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 14:25:54.173533 augenrules[1476]: No rules Mar 2 14:25:54.376088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 14:25:54.388960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 14:25:54.391882 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 14:25:54.394024 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 2 14:25:54.536797 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 2 14:25:54.559817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 14:25:54.560209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 14:25:54.574534 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 14:25:54.574937 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 14:25:54.588940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 14:25:54.589490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 14:25:54.590255 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 14:25:54.590727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 14:25:54.591906 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 2 14:25:54.603126 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 2 14:25:54.635634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 14:25:54.635995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 14:25:54.643059 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 2 14:25:54.685569 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 14:25:54.687039 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 2 14:25:54.691762 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 2 14:25:54.749087 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 14:25:54.832799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 14:25:54.921662 kernel: kvm_amd: TSC scaling supported Mar 2 14:25:54.921871 kernel: kvm_amd: Nested Virtualization enabled Mar 2 14:25:54.921908 kernel: kvm_amd: Nested Paging enabled Mar 2 14:25:54.925724 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 14:25:54.930825 kernel: kvm_amd: PMU virtualization is disabled Mar 2 14:25:54.955256 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 14:25:55.407621 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 2 14:25:55.420902 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 14:25:55.438244 systemd-networkd[1469]: lo: Link UP Mar 2 14:25:55.438253 systemd-networkd[1469]: lo: Gained carrier Mar 2 14:25:55.446460 systemd-networkd[1469]: Enumeration completed Mar 2 14:25:55.446569 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 2 14:25:55.448726 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 14:25:55.448736 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 14:25:55.450666 systemd-networkd[1469]: eth0: Link UP Mar 2 14:25:55.450933 systemd-networkd[1469]: eth0: Gained carrier Mar 2 14:25:55.450992 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 14:25:55.477616 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 2 14:25:55.503976 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 2 14:25:55.547593 systemd-resolved[1471]: Positive Trust Anchors: Mar 2 14:25:55.547664 systemd-resolved[1471]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 14:25:55.547708 systemd-resolved[1471]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 14:25:55.557876 systemd-resolved[1471]: Defaulting to hostname 'linux'. Mar 2 14:25:55.561564 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 14:25:55.573614 systemd[1]: Reached target network.target - Network. Mar 2 14:25:55.581839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 2 14:25:55.592224 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 14:25:55.601259 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 14:25:55.612772 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 14:25:55.616920 systemd-networkd[1469]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 14:25:55.623543 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. Mar 2 14:25:56.242988 systemd-timesyncd[1473]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 14:25:56.243177 systemd-timesyncd[1473]: Initial clock synchronization to Mon 2026-03-02 14:25:56.242908 UTC. Mar 2 14:25:56.243220 systemd-resolved[1471]: Clock change detected. Flushing caches. Mar 2 14:25:56.243484 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 2 14:25:56.254764 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 14:25:56.264458 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 14:25:56.278317 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 2 14:25:56.292535 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 2 14:25:56.292716 systemd[1]: Reached target paths.target - Path Units. Mar 2 14:25:56.302440 systemd[1]: Reached target timers.target - Timer Units. Mar 2 14:25:56.317953 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 14:25:56.333417 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 14:25:56.351749 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Mar 2 14:25:56.364948 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 2 14:25:56.380723 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 2 14:25:56.401335 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 2 14:25:56.414572 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 2 14:25:56.434958 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 2 14:25:56.452763 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 14:25:56.469649 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 14:25:56.481802 systemd[1]: Reached target basic.target - Basic System. Mar 2 14:25:56.491763 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 2 14:25:56.491810 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 14:25:56.495934 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 14:25:56.507720 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 2 14:25:56.521559 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 14:25:56.538705 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 2 14:25:56.554687 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 14:25:56.564458 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 14:25:56.569215 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 2 14:25:56.582865 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 2 14:25:56.598408 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 2 14:25:56.599644 jq[1517]: false Mar 2 14:25:56.615683 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 14:25:56.638583 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 2 14:25:56.641455 extend-filesystems[1518]: Found /dev/vda6 Mar 2 14:25:56.684400 extend-filesystems[1518]: Found /dev/vda9 Mar 2 14:25:56.684400 extend-filesystems[1518]: Checking size of /dev/vda9 Mar 2 14:25:56.668864 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 2 14:25:56.733435 extend-filesystems[1518]: Resized partition /dev/vda9 Mar 2 14:25:56.697771 oslogin_cache_refresh[1519]: Refreshing passwd entry cache Mar 2 14:25:56.744328 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache Mar 2 14:25:56.744328 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting Mar 2 14:25:56.744328 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 2 14:25:56.744328 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache Mar 2 14:25:56.698266 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 14:25:56.744635 extend-filesystems[1541]: resize2fs 1.47.3 (8-Jul-2025) Mar 2 14:25:56.830489 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 14:25:56.738642 oslogin_cache_refresh[1519]: Failure getting users, quitting Mar 2 14:25:56.702550 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 2 14:25:56.830837 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting Mar 2 14:25:56.830837 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 2 14:25:56.830924 update_engine[1537]: I20260302 14:25:56.818882 1537 main.cc:92] Flatcar Update Engine starting Mar 2 14:25:56.738738 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 2 14:25:56.710505 systemd[1]: Starting update-engine.service - Update Engine... Mar 2 14:25:56.738882 oslogin_cache_refresh[1519]: Refreshing group entry cache Mar 2 14:25:56.742601 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 2 14:25:56.761765 oslogin_cache_refresh[1519]: Failure getting groups, quitting Mar 2 14:25:56.775716 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 14:25:56.761789 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 2 14:25:56.797692 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 2 14:25:56.798159 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 14:25:56.798711 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 2 14:25:56.799002 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 2 14:25:56.813360 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 14:25:56.813768 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 2 14:25:56.832495 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 14:25:56.835883 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 2 14:25:56.855523 jq[1540]: true Mar 2 14:25:56.904276 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 14:25:56.919612 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 14:25:56.934736 jq[1552]: true Mar 2 14:25:56.950726 tar[1547]: linux-amd64/LICENSE Mar 2 14:25:56.952247 tar[1547]: linux-amd64/helm Mar 2 14:25:56.954223 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 14:25:56.998264 extend-filesystems[1541]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 14:25:56.998264 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 14:25:56.998264 extend-filesystems[1541]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 14:25:57.051580 extend-filesystems[1518]: Resized filesystem in /dev/vda9 Mar 2 14:25:56.999766 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 14:25:57.041473 dbus-daemon[1515]: [system] SELinux support is enabled Mar 2 14:25:57.109807 update_engine[1537]: I20260302 14:25:57.068727 1537 update_check_scheduler.cc:74] Next update check in 10m3s Mar 2 14:25:57.000525 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 14:25:57.048996 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 14:25:57.082378 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 14:25:57.082417 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 2 14:25:57.120413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 2 14:25:57.120455 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 14:25:57.131713 systemd[1]: Started update-engine.service - Update Engine. Mar 2 14:25:57.136877 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button) Mar 2 14:25:57.136911 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 14:25:57.137356 systemd-logind[1531]: New seat seat0. Mar 2 14:25:57.146910 systemd[1]: Started systemd-logind.service - User Login Management. Mar 2 14:25:57.310567 kernel: hrtimer: interrupt took 3229389 ns Mar 2 14:25:57.351167 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 2 14:25:57.421257 bash[1581]: Updated "/home/core/.ssh/authorized_keys" Mar 2 14:25:57.429953 kernel: EDAC MC: Ver: 3.0.0 Mar 2 14:25:57.427197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 2 14:25:57.438446 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 14:25:57.841525 systemd-networkd[1469]: eth0: Gained IPv6LL Mar 2 14:25:57.859265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 14:25:57.869918 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 14:25:57.883279 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 14:25:57.905634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:25:57.917641 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 14:25:58.002408 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 14:25:58.124861 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 14:25:58.144719 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Mar 2 14:25:58.145353 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 14:25:58.155916 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 14:25:58.515608 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 14:25:58.898458 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 14:25:58.917447 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 14:25:58.937605 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:48830.service - OpenSSH per-connection server daemon (10.0.0.1:48830). Mar 2 14:25:59.019844 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 14:25:59.020545 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 14:25:59.036851 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 14:25:59.342814 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 14:25:59.373968 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 14:25:59.475655 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 14:25:59.495274 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 2 14:26:00.023638 containerd[1557]: time="2026-03-02T14:26:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 2 14:26:00.267765 containerd[1557]: time="2026-03-02T14:26:00.262681986Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 2 14:26:00.348343 containerd[1557]: time="2026-03-02T14:26:00.346849639Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="265.376µs" Mar 2 14:26:00.348343 containerd[1557]: time="2026-03-02T14:26:00.346955557Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 2 14:26:00.348343 containerd[1557]: time="2026-03-02T14:26:00.348145327Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 2 14:26:00.349394 containerd[1557]: time="2026-03-02T14:26:00.348598754Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 2 14:26:00.349394 containerd[1557]: time="2026-03-02T14:26:00.348631405Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 2 14:26:00.349394 containerd[1557]: time="2026-03-02T14:26:00.348669145Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 2 14:26:00.349394 containerd[1557]: time="2026-03-02T14:26:00.348832791Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 2 14:26:00.349535 containerd[1557]: time="2026-03-02T14:26:00.348999713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 2 14:26:00.350895 
sshd[1621]: Accepted publickey for core from 10.0.0.1 port 48830 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:00.351529 containerd[1557]: time="2026-03-02T14:26:00.350929596Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 2 14:26:00.351529 containerd[1557]: time="2026-03-02T14:26:00.350958019Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 2 14:26:00.351529 containerd[1557]: time="2026-03-02T14:26:00.350975130Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 2 14:26:00.351529 containerd[1557]: time="2026-03-02T14:26:00.350992654Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 2 14:26:00.351640 containerd[1557]: time="2026-03-02T14:26:00.351607551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 2 14:26:00.353380 containerd[1557]: time="2026-03-02T14:26:00.352827589Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 2 14:26:00.353380 containerd[1557]: time="2026-03-02T14:26:00.352952402Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 2 14:26:00.353380 containerd[1557]: time="2026-03-02T14:26:00.352977379Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 2 14:26:00.354443 containerd[1557]: time="2026-03-02T14:26:00.353803591Z" level=info 
msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 2 14:26:00.358428 containerd[1557]: time="2026-03-02T14:26:00.356562372Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 2 14:26:00.358428 containerd[1557]: time="2026-03-02T14:26:00.356733030Z" level=info msg="metadata content store policy set" policy=shared Mar 2 14:26:00.361359 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:00.377740 containerd[1557]: time="2026-03-02T14:26:00.377653357Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 2 14:26:00.378568 containerd[1557]: time="2026-03-02T14:26:00.378378802Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 2 14:26:00.378568 containerd[1557]: time="2026-03-02T14:26:00.378484990Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 2 14:26:00.378568 containerd[1557]: time="2026-03-02T14:26:00.378505318Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 2 14:26:00.378745 containerd[1557]: time="2026-03-02T14:26:00.378671729Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 2 14:26:00.378779 containerd[1557]: time="2026-03-02T14:26:00.378754764Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 2 14:26:00.378811 containerd[1557]: time="2026-03-02T14:26:00.378782245Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 2 14:26:00.378811 containerd[1557]: time="2026-03-02T14:26:00.378801411Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 2 14:26:00.378889 
containerd[1557]: time="2026-03-02T14:26:00.378816900Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 2 14:26:00.378889 containerd[1557]: time="2026-03-02T14:26:00.378830204Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 2 14:26:00.378889 containerd[1557]: time="2026-03-02T14:26:00.378843490Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 2 14:26:00.378889 containerd[1557]: time="2026-03-02T14:26:00.378863026Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379315170Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379485909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379511166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379741636Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379764860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379778325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379792511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379805365Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379821155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379836624Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.379849067Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.380288237Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.380327911Z" level=info msg="Start snapshots syncer" Mar 2 14:26:00.381979 containerd[1557]: time="2026-03-02T14:26:00.380435972Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 2 14:26:00.384756 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 2 14:26:00.398811 containerd[1557]: time="2026-03-02T14:26:00.398454182Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 2 14:26:00.398811 containerd[1557]: time="2026-03-02T14:26:00.398625471Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Mar 2 14:26:00.402690 containerd[1557]: time="2026-03-02T14:26:00.401353133Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 2 14:26:00.416272 containerd[1557]: time="2026-03-02T14:26:00.415748749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 2 14:26:00.416272 containerd[1557]: time="2026-03-02T14:26:00.415883331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 2 14:26:00.416272 containerd[1557]: time="2026-03-02T14:26:00.415907005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 2 14:26:00.416272 containerd[1557]: time="2026-03-02T14:26:00.415922434Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 2 14:26:00.416272 containerd[1557]: time="2026-03-02T14:26:00.415938052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 2 14:26:00.416272 containerd[1557]: time="2026-03-02T14:26:00.415960955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 2 14:26:00.416272 containerd[1557]: time="2026-03-02T14:26:00.415979059Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 2 14:26:00.416630 containerd[1557]: time="2026-03-02T14:26:00.416317351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 2 14:26:00.416630 containerd[1557]: time="2026-03-02T14:26:00.416348990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 2 14:26:00.416630 containerd[1557]: time="2026-03-02T14:26:00.416369598Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 
Mar 2 14:26:00.416630 containerd[1557]: time="2026-03-02T14:26:00.416603866Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 14:26:00.416630 containerd[1557]: time="2026-03-02T14:26:00.416631046Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 2 14:26:00.416771 containerd[1557]: time="2026-03-02T14:26:00.416643951Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 14:26:00.416771 containerd[1557]: time="2026-03-02T14:26:00.416657586Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 2 14:26:00.416771 containerd[1557]: time="2026-03-02T14:26:00.416667805Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 2 14:26:00.416771 containerd[1557]: time="2026-03-02T14:26:00.416682272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 2 14:26:00.416877 containerd[1557]: time="2026-03-02T14:26:00.416787889Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 2 14:26:00.416909 containerd[1557]: time="2026-03-02T14:26:00.416885461Z" level=info msg="runtime interface created" Mar 2 14:26:00.416909 containerd[1557]: time="2026-03-02T14:26:00.416899267Z" level=info msg="created NRI interface" Mar 2 14:26:00.416974 containerd[1557]: time="2026-03-02T14:26:00.416913634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 2 14:26:00.417616 containerd[1557]: time="2026-03-02T14:26:00.417270200Z" level=info msg="Connect containerd service" Mar 2 14:26:00.417616 containerd[1557]: time="2026-03-02T14:26:00.417336383Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Mar 2 14:26:00.423583 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 14:26:00.424966 containerd[1557]: time="2026-03-02T14:26:00.424846825Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 14:26:00.460454 systemd-logind[1531]: New session 1 of user core. Mar 2 14:26:00.519450 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 14:26:00.556619 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 14:26:00.559406 tar[1547]: linux-amd64/README.md Mar 2 14:26:00.621379 (systemd)[1641]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 14:26:00.631401 systemd-logind[1531]: New session c1 of user core. Mar 2 14:26:00.635520 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 14:26:00.780993 containerd[1557]: time="2026-03-02T14:26:00.780586078Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 14:26:00.781861 containerd[1557]: time="2026-03-02T14:26:00.781838516Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 2 14:26:00.782656 containerd[1557]: time="2026-03-02T14:26:00.780767356Z" level=info msg="Start subscribing containerd event" Mar 2 14:26:00.790825 containerd[1557]: time="2026-03-02T14:26:00.782926217Z" level=info msg="Start recovering state" Mar 2 14:26:00.795975 containerd[1557]: time="2026-03-02T14:26:00.795922661Z" level=info msg="Start event monitor" Mar 2 14:26:00.797966 containerd[1557]: time="2026-03-02T14:26:00.797925099Z" level=info msg="Start cni network conf syncer for default" Mar 2 14:26:00.798330 containerd[1557]: time="2026-03-02T14:26:00.798311180Z" level=info msg="Start streaming server" Mar 2 14:26:00.798791 containerd[1557]: time="2026-03-02T14:26:00.798760329Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 2 14:26:00.800194 containerd[1557]: time="2026-03-02T14:26:00.799001839Z" level=info msg="runtime interface starting up..." Mar 2 14:26:00.800194 containerd[1557]: time="2026-03-02T14:26:00.799287042Z" level=info msg="starting plugins..." Mar 2 14:26:00.800194 containerd[1557]: time="2026-03-02T14:26:00.799696476Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 2 14:26:00.800739 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 14:26:00.801845 containerd[1557]: time="2026-03-02T14:26:00.801825581Z" level=info msg="containerd successfully booted in 0.941941s" Mar 2 14:26:01.007576 systemd[1641]: Queued start job for default target default.target. Mar 2 14:26:01.024572 systemd[1641]: Created slice app.slice - User Application Slice. Mar 2 14:26:01.024686 systemd[1641]: Reached target paths.target - Paths. Mar 2 14:26:01.024819 systemd[1641]: Reached target timers.target - Timers. Mar 2 14:26:01.029649 systemd[1641]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 14:26:01.070574 systemd[1641]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Mar 2 14:26:01.070791 systemd[1641]: Reached target sockets.target - Sockets. Mar 2 14:26:01.070859 systemd[1641]: Reached target basic.target - Basic System. Mar 2 14:26:01.070924 systemd[1641]: Reached target default.target - Main User Target. Mar 2 14:26:01.070975 systemd[1641]: Startup finished in 405ms. Mar 2 14:26:01.072431 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 14:26:01.108314 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 14:26:01.161489 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:38454.service - OpenSSH per-connection server daemon (10.0.0.1:38454). Mar 2 14:26:01.293945 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 38454 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:01.301888 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:01.340643 systemd-logind[1531]: New session 2 of user core. Mar 2 14:26:01.353842 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 14:26:01.417376 sshd[1668]: Connection closed by 10.0.0.1 port 38454 Mar 2 14:26:01.415481 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Mar 2 14:26:01.439317 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:38454.service: Deactivated successfully. Mar 2 14:26:01.444300 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 14:26:01.447481 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit. Mar 2 14:26:01.452844 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:38470.service - OpenSSH per-connection server daemon (10.0.0.1:38470). Mar 2 14:26:01.485301 systemd-logind[1531]: Removed session 2. 
Mar 2 14:26:01.642244 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 38470 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:01.647550 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:01.673717 systemd-logind[1531]: New session 3 of user core. Mar 2 14:26:01.683671 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 14:26:01.816418 sshd[1677]: Connection closed by 10.0.0.1 port 38470 Mar 2 14:26:01.813866 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Mar 2 14:26:01.827913 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:38470.service: Deactivated successfully. Mar 2 14:26:01.831814 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 14:26:01.835927 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit. Mar 2 14:26:01.843765 systemd-logind[1531]: Removed session 3. Mar 2 14:26:03.008425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:26:03.034369 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 14:26:03.047804 systemd[1]: Startup finished in 19.440s (kernel) + 20.673s (initrd) + 17.821s (userspace) = 57.935s. Mar 2 14:26:03.057373 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 14:26:04.644511 kubelet[1686]: E0302 14:26:04.643611 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 14:26:04.664382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 14:26:04.667990 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 14:26:04.670290 systemd[1]: kubelet.service: Consumed 2.685s CPU time, 258.1M memory peak. Mar 2 14:26:11.842841 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:51260.service - OpenSSH per-connection server daemon (10.0.0.1:51260). Mar 2 14:26:11.978324 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 51260 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:11.991740 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:12.011454 systemd-logind[1531]: New session 4 of user core. Mar 2 14:26:12.021683 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 14:26:12.064901 sshd[1704]: Connection closed by 10.0.0.1 port 51260 Mar 2 14:26:12.066313 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Mar 2 14:26:12.079820 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:51260.service: Deactivated successfully. Mar 2 14:26:12.083830 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 14:26:12.090528 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit. Mar 2 14:26:12.097713 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:51268.service - OpenSSH per-connection server daemon (10.0.0.1:51268). Mar 2 14:26:12.102651 systemd-logind[1531]: Removed session 4. Mar 2 14:26:12.232768 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 51268 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:12.237624 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:12.249971 systemd-logind[1531]: New session 5 of user core. Mar 2 14:26:12.261655 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 2 14:26:12.295390 sshd[1713]: Connection closed by 10.0.0.1 port 51268 Mar 2 14:26:12.296591 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Mar 2 14:26:12.321688 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:51268.service: Deactivated successfully. Mar 2 14:26:12.324878 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 14:26:12.328363 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit. Mar 2 14:26:12.333220 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:51270.service - OpenSSH per-connection server daemon (10.0.0.1:51270). Mar 2 14:26:12.337350 systemd-logind[1531]: Removed session 5. Mar 2 14:26:12.437701 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 51270 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:12.438589 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:12.454568 systemd-logind[1531]: New session 6 of user core. Mar 2 14:26:12.469685 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 14:26:12.519867 sshd[1722]: Connection closed by 10.0.0.1 port 51270 Mar 2 14:26:12.521223 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Mar 2 14:26:12.550805 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:51270.service: Deactivated successfully. Mar 2 14:26:12.554904 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 14:26:12.558288 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. Mar 2 14:26:12.566427 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:51276.service - OpenSSH per-connection server daemon (10.0.0.1:51276). Mar 2 14:26:12.572589 systemd-logind[1531]: Removed session 6. 
Mar 2 14:26:12.703547 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 51276 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:12.708925 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:12.737427 systemd-logind[1531]: New session 7 of user core. Mar 2 14:26:12.758414 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 14:26:12.838408 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 14:26:12.838900 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:26:12.893385 sudo[1732]: pam_unix(sudo:session): session closed for user root Mar 2 14:26:12.899480 sshd[1731]: Connection closed by 10.0.0.1 port 51276 Mar 2 14:26:12.899977 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Mar 2 14:26:12.925490 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:51276.service: Deactivated successfully. Mar 2 14:26:12.929696 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 14:26:12.935520 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit. Mar 2 14:26:12.942648 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:51292.service - OpenSSH per-connection server daemon (10.0.0.1:51292). Mar 2 14:26:12.949569 systemd-logind[1531]: Removed session 7. Mar 2 14:26:13.082635 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 51292 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:13.091282 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:13.122315 systemd-logind[1531]: New session 8 of user core. Mar 2 14:26:13.150910 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 2 14:26:13.205387 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 14:26:13.205909 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:26:13.236462 sudo[1743]: pam_unix(sudo:session): session closed for user root Mar 2 14:26:13.255939 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 2 14:26:13.257384 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:26:13.295271 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 2 14:26:13.472954 augenrules[1765]: No rules Mar 2 14:26:13.474941 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 14:26:13.475849 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 2 14:26:13.483258 sudo[1742]: pam_unix(sudo:session): session closed for user root Mar 2 14:26:13.492321 sshd[1741]: Connection closed by 10.0.0.1 port 51292 Mar 2 14:26:13.494627 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Mar 2 14:26:13.513930 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:51292.service: Deactivated successfully. Mar 2 14:26:13.520003 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 14:26:13.525323 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit. Mar 2 14:26:13.530477 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:51302.service - OpenSSH per-connection server daemon (10.0.0.1:51302). Mar 2 14:26:13.534638 systemd-logind[1531]: Removed session 8. Mar 2 14:26:13.693796 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 51302 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:26:13.697569 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:26:13.717434 systemd-logind[1531]: New session 9 of user core. 
Mar 2 14:26:13.739641 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 14:26:13.794718 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 14:26:13.796435 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 14:26:14.925679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 14:26:14.977947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:26:17.123261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:26:17.398988 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 14:26:19.693922 kubelet[1799]: E0302 14:26:19.693328 1799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 14:26:19.745847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 14:26:19.746494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 14:26:19.748322 systemd[1]: kubelet.service: Consumed 3.030s CPU time, 110.8M memory peak. Mar 2 14:26:21.053729 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 2 14:26:21.088625 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 14:26:22.463138 dockerd[1814]: time="2026-03-02T14:26:22.460868638Z" level=info msg="Starting up" Mar 2 14:26:22.474275 dockerd[1814]: time="2026-03-02T14:26:22.473454244Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 2 14:26:23.034998 dockerd[1814]: time="2026-03-02T14:26:23.034517004Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 2 14:26:23.726672 dockerd[1814]: time="2026-03-02T14:26:23.725862987Z" level=info msg="Loading containers: start." Mar 2 14:26:23.823710 kernel: Initializing XFRM netlink socket Mar 2 14:26:27.116720 systemd-networkd[1469]: docker0: Link UP Mar 2 14:26:27.163168 dockerd[1814]: time="2026-03-02T14:26:27.162641379Z" level=info msg="Loading containers: done." Mar 2 14:26:27.515751 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3925570808-merged.mount: Deactivated successfully. 
Mar 2 14:26:27.692617 dockerd[1814]: time="2026-03-02T14:26:27.691673971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 14:26:27.692617 dockerd[1814]: time="2026-03-02T14:26:27.692512787Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 2 14:26:27.692617 dockerd[1814]: time="2026-03-02T14:26:27.692722880Z" level=info msg="Initializing buildkit" Mar 2 14:26:28.174952 dockerd[1814]: time="2026-03-02T14:26:28.173825947Z" level=info msg="Completed buildkit initialization" Mar 2 14:26:28.219447 dockerd[1814]: time="2026-03-02T14:26:28.218763402Z" level=info msg="Daemon has completed initialization" Mar 2 14:26:28.219447 dockerd[1814]: time="2026-03-02T14:26:28.219171135Z" level=info msg="API listen on /run/docker.sock" Mar 2 14:26:28.222916 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 14:26:29.911284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 14:26:29.923648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:26:31.517582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 14:26:31.549742 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 14:26:31.844549 kubelet[2042]: E0302 14:26:31.843375 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 14:26:31.860179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 14:26:31.860487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 14:26:31.861621 systemd[1]: kubelet.service: Consumed 903ms CPU time, 110.7M memory peak. Mar 2 14:26:32.557248 containerd[1557]: time="2026-03-02T14:26:32.555740833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 2 14:26:33.813984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375418864.mount: Deactivated successfully. Mar 2 14:26:41.885333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 2 14:26:41.895696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 2 14:26:41.975623 containerd[1557]: time="2026-03-02T14:26:41.975399577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:26:41.985581 containerd[1557]: time="2026-03-02T14:26:41.985540243Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 2 14:26:41.999126 containerd[1557]: time="2026-03-02T14:26:41.994387079Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:26:42.025000 containerd[1557]: time="2026-03-02T14:26:42.024919544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:26:42.029361 containerd[1557]: time="2026-03-02T14:26:42.029323527Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 9.473538865s" Mar 2 14:26:42.029783 containerd[1557]: time="2026-03-02T14:26:42.029471881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 2 14:26:42.032000 containerd[1557]: time="2026-03-02T14:26:42.031968171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 2 14:26:42.131574 update_engine[1537]: I20260302 14:26:42.130805 1537 update_attempter.cc:509] Updating boot flags... 
Mar 2 14:26:42.691376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:42.726448 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:26:43.014885 kubelet[2138]: E0302 14:26:43.014418 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:26:43.023590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:26:43.024259 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:26:43.024923 systemd[1]: kubelet.service: Consumed 522ms CPU time, 108.9M memory peak.
Mar 2 14:26:47.746679 containerd[1557]: time="2026-03-02T14:26:47.746501452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:47.751326 containerd[1557]: time="2026-03-02T14:26:47.751286502Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 2 14:26:47.755828 containerd[1557]: time="2026-03-02T14:26:47.755387617Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:47.769831 containerd[1557]: time="2026-03-02T14:26:47.767835012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:47.770444 containerd[1557]: time="2026-03-02T14:26:47.770303161Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 5.737097388s"
Mar 2 14:26:47.770444 containerd[1557]: time="2026-03-02T14:26:47.770343174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 2 14:26:47.780456 containerd[1557]: time="2026-03-02T14:26:47.776684403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 2 14:26:52.387331 containerd[1557]: time="2026-03-02T14:26:52.384471740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:52.393368 containerd[1557]: time="2026-03-02T14:26:52.391985867Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 2 14:26:52.399292 containerd[1557]: time="2026-03-02T14:26:52.399215648Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:52.422117 containerd[1557]: time="2026-03-02T14:26:52.419402571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:26:52.422117 containerd[1557]: time="2026-03-02T14:26:52.420593944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 4.643870227s"
Mar 2 14:26:52.422117 containerd[1557]: time="2026-03-02T14:26:52.420645710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 2 14:26:52.425435 containerd[1557]: time="2026-03-02T14:26:52.425403391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 2 14:26:53.135445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 2 14:26:53.147544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:26:54.023592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:26:54.066853 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:26:54.361857 kubelet[2165]: E0302 14:26:54.361639 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:26:54.367910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:26:54.368449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:26:54.369426 systemd[1]: kubelet.service: Consumed 535ms CPU time, 110.6M memory peak.
Mar 2 14:26:56.235390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811788929.mount: Deactivated successfully.
Mar 2 14:27:00.223419 containerd[1557]: time="2026-03-02T14:27:00.220541365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:00.229336 containerd[1557]: time="2026-03-02T14:27:00.229298760Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 2 14:27:00.240600 containerd[1557]: time="2026-03-02T14:27:00.240340797Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:00.264218 containerd[1557]: time="2026-03-02T14:27:00.261432503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:00.269347 containerd[1557]: time="2026-03-02T14:27:00.267947919Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 7.842416631s"
Mar 2 14:27:00.269347 containerd[1557]: time="2026-03-02T14:27:00.267983406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 2 14:27:00.276254 containerd[1557]: time="2026-03-02T14:27:00.276223890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 2 14:27:01.485297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767199332.mount: Deactivated successfully.
Mar 2 14:27:04.387948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 2 14:27:04.409275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:27:05.060542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:27:05.130347 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:27:05.593881 kubelet[2245]: E0302 14:27:05.588527 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:27:05.600425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:27:05.600796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:27:05.601454 systemd[1]: kubelet.service: Consumed 475ms CPU time, 110.8M memory peak.
Mar 2 14:27:12.107209 containerd[1557]: time="2026-03-02T14:27:12.101467772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:12.111521 containerd[1557]: time="2026-03-02T14:27:12.109960303Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 2 14:27:12.121868 containerd[1557]: time="2026-03-02T14:27:12.121786468Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:12.146485 containerd[1557]: time="2026-03-02T14:27:12.146367771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:12.156409 containerd[1557]: time="2026-03-02T14:27:12.154283176Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 11.877931318s"
Mar 2 14:27:12.156409 containerd[1557]: time="2026-03-02T14:27:12.154337707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 2 14:27:12.161990 containerd[1557]: time="2026-03-02T14:27:12.161503011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 2 14:27:13.263855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352256589.mount: Deactivated successfully.
Mar 2 14:27:13.339410 containerd[1557]: time="2026-03-02T14:27:13.328895992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:13.350329 containerd[1557]: time="2026-03-02T14:27:13.349971112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 2 14:27:13.357409 containerd[1557]: time="2026-03-02T14:27:13.357347322Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:13.370335 containerd[1557]: time="2026-03-02T14:27:13.369253152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:13.373991 containerd[1557]: time="2026-03-02T14:27:13.373351781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.211806642s"
Mar 2 14:27:13.373991 containerd[1557]: time="2026-03-02T14:27:13.373389192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 2 14:27:13.377231 containerd[1557]: time="2026-03-02T14:27:13.377206725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 2 14:27:14.628335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698250776.mount: Deactivated successfully.
Mar 2 14:27:15.638799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 2 14:27:15.651864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:27:16.512319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:27:16.538566 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:27:16.863813 kubelet[2278]: E0302 14:27:16.860522 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:27:16.881763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:27:16.883391 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:27:16.886363 systemd[1]: kubelet.service: Consumed 549ms CPU time, 110.9M memory peak.
Mar 2 14:27:24.292539 containerd[1557]: time="2026-03-02T14:27:24.289211840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:24.306414 containerd[1557]: time="2026-03-02T14:27:24.305753225Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 2 14:27:24.313785 containerd[1557]: time="2026-03-02T14:27:24.310545323Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:24.336003 containerd[1557]: time="2026-03-02T14:27:24.332769357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:27:24.336463 containerd[1557]: time="2026-03-02T14:27:24.336432202Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 10.959116784s"
Mar 2 14:27:24.336579 containerd[1557]: time="2026-03-02T14:27:24.336557386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 2 14:27:26.893758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 2 14:27:26.903814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:27:27.487643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:27:27.521908 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 14:27:27.698162 kubelet[2368]: E0302 14:27:27.697342 2368 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 14:27:27.707271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 14:27:27.707508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 14:27:27.708262 systemd[1]: kubelet.service: Consumed 397ms CPU time, 110.8M memory peak.
Mar 2 14:27:28.068629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:27:28.069975 systemd[1]: kubelet.service: Consumed 397ms CPU time, 110.8M memory peak.
Mar 2 14:27:28.077919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:27:28.180895 systemd[1]: Reload requested from client PID 2384 ('systemctl') (unit session-9.scope)...
Mar 2 14:27:28.180983 systemd[1]: Reloading...
Mar 2 14:27:28.407299 zram_generator::config[2425]: No configuration found.
Mar 2 14:27:29.145885 systemd[1]: Reloading finished in 964 ms.
Mar 2 14:27:29.334875 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 2 14:27:29.335618 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 2 14:27:29.339203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:27:29.339269 systemd[1]: kubelet.service: Consumed 235ms CPU time, 98.1M memory peak.
Mar 2 14:27:29.345842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 14:27:29.972875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 14:27:30.004383 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 14:27:30.243185 kubelet[2474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 14:27:30.802930 kubelet[2474]: I0302 14:27:30.797645 2474 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 2 14:27:30.802930 kubelet[2474]: I0302 14:27:30.798446 2474 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 14:27:30.802930 kubelet[2474]: I0302 14:27:30.799926 2474 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 14:27:30.802930 kubelet[2474]: I0302 14:27:30.799941 2474 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 14:27:30.802930 kubelet[2474]: I0302 14:27:30.800459 2474 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 2 14:27:30.900159 kubelet[2474]: E0302 14:27:30.899601 2474 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 14:27:30.903566 kubelet[2474]: I0302 14:27:30.901832 2474 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 14:27:30.937915 kubelet[2474]: I0302 14:27:30.935533 2474 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 2 14:27:30.983253 kubelet[2474]: I0302 14:27:30.981944 2474 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 14:27:30.987768 kubelet[2474]: I0302 14:27:30.986246 2474 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 14:27:30.987768 kubelet[2474]: I0302 14:27:30.986299 2474 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 14:27:30.989325 kubelet[2474]: I0302 14:27:30.986657 2474 topology_manager.go:143] "Creating topology manager with none policy"
Mar 2 14:27:30.989325 kubelet[2474]: I0302 14:27:30.988398 2474 container_manager_linux.go:308] "Creating device plugin manager"
Mar 2 14:27:30.989325 kubelet[2474]: I0302 14:27:30.988545 2474 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 14:27:30.998798 kubelet[2474]: I0302 14:27:30.998506 2474 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 2 14:27:30.999182 kubelet[2474]: I0302 14:27:30.998974 2474 kubelet.go:482] "Attempting to sync node with API server"
Mar 2 14:27:30.999182 kubelet[2474]: I0302 14:27:30.998996 2474 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 14:27:30.999182 kubelet[2474]: I0302 14:27:30.999161 2474 kubelet.go:394] "Adding apiserver pod source"
Mar 2 14:27:30.999182 kubelet[2474]: I0302 14:27:30.999184 2474 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 14:27:31.013168 kubelet[2474]: I0302 14:27:31.011445 2474 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 2 14:27:31.021804 kubelet[2474]: I0302 14:27:31.020749 2474 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 14:27:31.021804 kubelet[2474]: I0302 14:27:31.020842 2474 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 14:27:31.021953 kubelet[2474]: W0302 14:27:31.021882 2474 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 14:27:31.040904 kubelet[2474]: I0302 14:27:31.040259 2474 server.go:1257] "Started kubelet"
Mar 2 14:27:31.042952 kubelet[2474]: I0302 14:27:31.042840 2474 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 14:27:31.044490 kubelet[2474]: I0302 14:27:31.044186 2474 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 14:27:31.044490 kubelet[2474]: I0302 14:27:31.044386 2474 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 14:27:31.057763 kubelet[2474]: I0302 14:27:31.057506 2474 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 14:27:31.065916 kubelet[2474]: I0302 14:27:31.065333 2474 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 14:27:31.071624 kubelet[2474]: I0302 14:27:31.071599 2474 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 2 14:27:31.077408 kubelet[2474]: I0302 14:27:31.077376 2474 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 14:27:31.085131 kubelet[2474]: I0302 14:27:31.080895 2474 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 2 14:27:31.085131 kubelet[2474]: E0302 14:27:31.081613 2474 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 14:27:31.085131 kubelet[2474]: I0302 14:27:31.083542 2474 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 14:27:31.085131 kubelet[2474]: I0302 14:27:31.083838 2474 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 14:27:31.094157 kubelet[2474]: E0302 14:27:31.093971 2474 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms"
Mar 2 14:27:31.094655 kubelet[2474]: I0302 14:27:31.094562 2474 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 14:27:31.105631 kubelet[2474]: E0302 14:27:31.096492 2474 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18990c7b102be6e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 14:27:31.038824163 +0000 UTC m=+1.020648710,LastTimestamp:2026-03-02 14:27:31.038824163 +0000 UTC m=+1.020648710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 14:27:31.109212 kubelet[2474]: E0302 14:27:31.107997 2474 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 14:27:31.116260 kubelet[2474]: I0302 14:27:31.115477 2474 factory.go:223] Registration of the containerd container factory successfully
Mar 2 14:27:31.116260 kubelet[2474]: I0302 14:27:31.115496 2474 factory.go:223] Registration of the systemd container factory successfully
Mar 2 14:27:31.182197 kubelet[2474]: E0302 14:27:31.181761 2474 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 14:27:31.201810 kubelet[2474]: I0302 14:27:31.200184 2474 cpu_manager.go:225] "Starting" policy="none"
Mar 2 14:27:31.201810 kubelet[2474]: I0302 14:27:31.200202 2474 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 2 14:27:31.201810 kubelet[2474]: I0302 14:27:31.200226 2474 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 2 14:27:31.208121 kubelet[2474]: I0302 14:27:31.207562 2474 policy_none.go:50] "Start"
Mar 2 14:27:31.208121 kubelet[2474]: I0302 14:27:31.207768 2474 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 14:27:31.208121 kubelet[2474]: I0302 14:27:31.207850 2474 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 14:27:31.220488 kubelet[2474]: I0302 14:27:31.220212 2474 policy_none.go:44] "Start"
Mar 2 14:27:31.241596 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 14:27:31.244307 kubelet[2474]: I0302 14:27:31.243999 2474 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 14:27:31.257899 kubelet[2474]: I0302 14:27:31.256420 2474 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 14:27:31.257899 kubelet[2474]: I0302 14:27:31.256835 2474 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 2 14:27:31.257899 kubelet[2474]: I0302 14:27:31.256877 2474 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 2 14:27:31.257899 kubelet[2474]: E0302 14:27:31.256956 2474 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 14:27:31.282930 kubelet[2474]: E0302 14:27:31.282248 2474 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 14:27:31.295907 kubelet[2474]: E0302 14:27:31.295751 2474 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms"
Mar 2 14:27:31.299146 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 14:27:31.311498 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 14:27:31.335976 kubelet[2474]: E0302 14:27:31.334634 2474 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 14:27:31.335976 kubelet[2474]: I0302 14:27:31.335137 2474 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 2 14:27:31.335976 kubelet[2474]: I0302 14:27:31.335154 2474 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 14:27:31.340287 kubelet[2474]: I0302 14:27:31.337754 2474 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 2 14:27:31.342879 kubelet[2474]: E0302 14:27:31.342769 2474 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 14:27:31.342879 kubelet[2474]: E0302 14:27:31.342816 2474 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 14:27:31.395560 kubelet[2474]: I0302 14:27:31.394645 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:27:31.395560 kubelet[2474]: I0302 14:27:31.394772 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:27:31.395560 kubelet[2474]: I0302 14:27:31.394825 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:27:31.395560 kubelet[2474]: I0302 14:27:31.394856 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/250a41d7e51f37f55d22c66e3e26abd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"250a41d7e51f37f55d22c66e3e26abd8\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 14:27:31.395560 kubelet[2474]: I0302 14:27:31.394879 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/250a41d7e51f37f55d22c66e3e26abd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"250a41d7e51f37f55d22c66e3e26abd8\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 14:27:31.396217 kubelet[2474]: I0302 14:27:31.394897 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/250a41d7e51f37f55d22c66e3e26abd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"250a41d7e51f37f55d22c66e3e26abd8\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 14:27:31.396217 kubelet[2474]: I0302 14:27:31.394919 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:27:31.396217 kubelet[2474]: I0302 14:27:31.394941 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 14:27:31.441940 systemd[1]: Created slice kubepods-burstable-pod250a41d7e51f37f55d22c66e3e26abd8.slice - libcontainer container kubepods-burstable-pod250a41d7e51f37f55d22c66e3e26abd8.slice.
Mar 2 14:27:31.447959 kubelet[2474]: I0302 14:27:31.446936 2474 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 2 14:27:31.447959 kubelet[2474]: E0302 14:27:31.447908 2474 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost"
Mar 2 14:27:31.473157 kubelet[2474]: E0302 14:27:31.472976 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 14:27:31.496280 kubelet[2474]: I0302 14:27:31.496244 2474 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 14:27:31.498947 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
Mar 2 14:27:31.536914 kubelet[2474]: E0302 14:27:31.534932 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:27:31.547879 kubelet[2474]: E0302 14:27:31.545871 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:31.550131 containerd[1557]: time="2026-03-02T14:27:31.549554748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 2 14:27:31.550381 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 2 14:27:31.570140 kubelet[2474]: E0302 14:27:31.565289 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:27:31.650225 kubelet[2474]: I0302 14:27:31.649864 2474 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 14:27:31.650628 kubelet[2474]: E0302 14:27:31.650368 2474 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Mar 2 14:27:31.698965 kubelet[2474]: E0302 14:27:31.698599 2474 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Mar 2 14:27:31.801946 kubelet[2474]: E0302 14:27:31.801405 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:31.808586 containerd[1557]: time="2026-03-02T14:27:31.808302802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:250a41d7e51f37f55d22c66e3e26abd8,Namespace:kube-system,Attempt:0,}" Mar 2 14:27:31.885947 kubelet[2474]: E0302 14:27:31.883277 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:31.887451 containerd[1557]: time="2026-03-02T14:27:31.887279720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 2 14:27:32.055241 kubelet[2474]: I0302 14:27:32.053917 2474 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 14:27:32.055241 kubelet[2474]: E0302 14:27:32.054635 2474 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Mar 2 14:27:32.508747 kubelet[2474]: E0302 14:27:32.508227 2474 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Mar 2 14:27:32.574539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2706535083.mount: Deactivated successfully. 
Mar 2 14:27:32.619623 containerd[1557]: time="2026-03-02T14:27:32.619310552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:27:32.643449 containerd[1557]: time="2026-03-02T14:27:32.642871664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 2 14:27:32.652743 containerd[1557]: time="2026-03-02T14:27:32.652347883Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:27:32.661579 containerd[1557]: time="2026-03-02T14:27:32.661247878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:27:32.671289 containerd[1557]: time="2026-03-02T14:27:32.669513983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 2 14:27:32.695910 containerd[1557]: time="2026-03-02T14:27:32.695551732Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:27:32.703001 containerd[1557]: time="2026-03-02T14:27:32.702617060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 2 14:27:32.707218 containerd[1557]: time="2026-03-02T14:27:32.706871042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 14:27:32.714185 
containerd[1557]: time="2026-03-02T14:27:32.709587827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.141282191s" Mar 2 14:27:32.714185 containerd[1557]: time="2026-03-02T14:27:32.713536957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 897.175849ms" Mar 2 14:27:32.793611 containerd[1557]: time="2026-03-02T14:27:32.793366828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 874.701271ms" Mar 2 14:27:32.829977 containerd[1557]: time="2026-03-02T14:27:32.828952881Z" level=info msg="connecting to shim dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01" address="unix:///run/containerd/s/15337241d53fae0cd00415da7a17494b41c72d76e336cf60976af3d4a5056a3e" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:27:32.868955 kubelet[2474]: I0302 14:27:32.867005 2474 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 14:27:32.868955 kubelet[2474]: E0302 14:27:32.867508 2474 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Mar 2 14:27:32.905988 containerd[1557]: 
time="2026-03-02T14:27:32.905875685Z" level=info msg="connecting to shim 69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7" address="unix:///run/containerd/s/ddf0fda1a75725a7c3207ef04d623eff91807eda3438deffa26d74a19e7fb8bd" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:27:32.955654 containerd[1557]: time="2026-03-02T14:27:32.953882575Z" level=info msg="connecting to shim 0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00" address="unix:///run/containerd/s/4f631d6a7ec367d9d4710d4da303b1df5ea39a10a8d3dfcbff3dab6469f9d5e8" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:27:32.990264 systemd[1]: Started cri-containerd-dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01.scope - libcontainer container dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01. Mar 2 14:27:33.036197 kubelet[2474]: E0302 14:27:33.035985 2474 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 14:27:33.093607 systemd[1]: Started cri-containerd-69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7.scope - libcontainer container 69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7. Mar 2 14:27:33.134589 systemd[1]: Started cri-containerd-0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00.scope - libcontainer container 0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00. 
Mar 2 14:27:33.330589 containerd[1557]: time="2026-03-02T14:27:33.330546157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01\"" Mar 2 14:27:33.336141 kubelet[2474]: E0302 14:27:33.335971 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:33.371257 containerd[1557]: time="2026-03-02T14:27:33.370295002Z" level=info msg="CreateContainer within sandbox \"dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 14:27:33.441229 containerd[1557]: time="2026-03-02T14:27:33.440811767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:250a41d7e51f37f55d22c66e3e26abd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7\"" Mar 2 14:27:33.442755 kubelet[2474]: E0302 14:27:33.442276 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:33.463181 containerd[1557]: time="2026-03-02T14:27:33.461783982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00\"" Mar 2 14:27:33.465863 kubelet[2474]: E0302 14:27:33.465474 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:33.466598 containerd[1557]: 
time="2026-03-02T14:27:33.466500296Z" level=info msg="CreateContainer within sandbox \"69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 14:27:33.466843 containerd[1557]: time="2026-03-02T14:27:33.466603499Z" level=info msg="Container baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:27:33.476988 containerd[1557]: time="2026-03-02T14:27:33.476612275Z" level=info msg="CreateContainer within sandbox \"0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 14:27:33.514951 containerd[1557]: time="2026-03-02T14:27:33.514759174Z" level=info msg="Container 23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:27:33.526406 containerd[1557]: time="2026-03-02T14:27:33.526226337Z" level=info msg="CreateContainer within sandbox \"dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5\"" Mar 2 14:27:33.534146 containerd[1557]: time="2026-03-02T14:27:33.531571826Z" level=info msg="StartContainer for \"baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5\"" Mar 2 14:27:33.540271 containerd[1557]: time="2026-03-02T14:27:33.538637567Z" level=info msg="connecting to shim baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5" address="unix:///run/containerd/s/15337241d53fae0cd00415da7a17494b41c72d76e336cf60976af3d4a5056a3e" protocol=ttrpc version=3 Mar 2 14:27:33.566161 kubelet[2474]: E0302 14:27:33.565334 2474 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18990c7b102be6e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 14:27:31.038824163 +0000 UTC m=+1.020648710,LastTimestamp:2026-03-02 14:27:31.038824163 +0000 UTC m=+1.020648710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 14:27:33.581215 containerd[1557]: time="2026-03-02T14:27:33.580952654Z" level=info msg="CreateContainer within sandbox \"69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504\"" Mar 2 14:27:33.584302 containerd[1557]: time="2026-03-02T14:27:33.584159479Z" level=info msg="StartContainer for \"23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504\"" Mar 2 14:27:33.598878 containerd[1557]: time="2026-03-02T14:27:33.593393230Z" level=info msg="connecting to shim 23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504" address="unix:///run/containerd/s/ddf0fda1a75725a7c3207ef04d623eff91807eda3438deffa26d74a19e7fb8bd" protocol=ttrpc version=3 Mar 2 14:27:33.628224 containerd[1557]: time="2026-03-02T14:27:33.626301395Z" level=info msg="Container 44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:27:33.686367 containerd[1557]: time="2026-03-02T14:27:33.686252094Z" level=info msg="CreateContainer within sandbox \"0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea\"" 
Mar 2 14:27:33.688562 containerd[1557]: time="2026-03-02T14:27:33.688459967Z" level=info msg="StartContainer for \"44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea\"" Mar 2 14:27:33.718343 systemd[1]: Started cri-containerd-23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504.scope - libcontainer container 23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504. Mar 2 14:27:33.719411 containerd[1557]: time="2026-03-02T14:27:33.718944705Z" level=info msg="connecting to shim 44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea" address="unix:///run/containerd/s/4f631d6a7ec367d9d4710d4da303b1df5ea39a10a8d3dfcbff3dab6469f9d5e8" protocol=ttrpc version=3 Mar 2 14:27:33.747410 systemd[1]: Started cri-containerd-baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5.scope - libcontainer container baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5. Mar 2 14:27:33.844566 systemd[1]: Started cri-containerd-44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea.scope - libcontainer container 44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea. 
Mar 2 14:27:34.017142 containerd[1557]: time="2026-03-02T14:27:34.016427706Z" level=info msg="StartContainer for \"23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504\" returns successfully" Mar 2 14:27:34.081202 containerd[1557]: time="2026-03-02T14:27:34.080630017Z" level=info msg="StartContainer for \"baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5\" returns successfully" Mar 2 14:27:34.108880 containerd[1557]: time="2026-03-02T14:27:34.106971285Z" level=info msg="StartContainer for \"44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea\" returns successfully" Mar 2 14:27:34.113252 kubelet[2474]: E0302 14:27:34.111409 2474 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="3.2s" Mar 2 14:27:34.346176 kubelet[2474]: E0302 14:27:34.342577 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:27:34.351918 kubelet[2474]: E0302 14:27:34.347771 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:34.391411 kubelet[2474]: E0302 14:27:34.391286 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:27:34.391648 kubelet[2474]: E0302 14:27:34.391551 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:34.414301 kubelet[2474]: E0302 14:27:34.413849 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Mar 2 14:27:34.414301 kubelet[2474]: E0302 14:27:34.414195 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:34.480751 kubelet[2474]: I0302 14:27:34.475831 2474 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 14:27:35.423503 kubelet[2474]: E0302 14:27:35.423247 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:27:35.423503 kubelet[2474]: E0302 14:27:35.423465 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:35.423503 kubelet[2474]: E0302 14:27:35.423477 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:27:35.423503 kubelet[2474]: E0302 14:27:35.423590 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:36.436264 kubelet[2474]: E0302 14:27:36.435168 2474 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 14:27:36.436264 kubelet[2474]: E0302 14:27:36.435576 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:38.708796 kubelet[2474]: E0302 14:27:38.708737 2474 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 14:27:38.871558 
kubelet[2474]: I0302 14:27:38.871411 2474 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 2 14:27:38.871558 kubelet[2474]: E0302 14:27:38.871453 2474 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 2 14:27:38.886408 kubelet[2474]: I0302 14:27:38.885857 2474 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:38.980600 kubelet[2474]: E0302 14:27:38.980482 2474 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:38.985227 kubelet[2474]: I0302 14:27:38.982774 2474 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:38.997143 kubelet[2474]: E0302 14:27:38.996906 2474 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:38.997143 kubelet[2474]: I0302 14:27:38.996946 2474 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 14:27:39.010858 kubelet[2474]: I0302 14:27:39.010832 2474 apiserver.go:52] "Watching apiserver" Mar 2 14:27:39.031813 kubelet[2474]: E0302 14:27:39.031002 2474 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 14:27:39.189949 kubelet[2474]: I0302 14:27:39.188514 2474 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 14:27:40.592956 kubelet[2474]: I0302 14:27:40.589865 2474 kubelet.go:3340] "Creating a mirror pod 
for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 14:27:40.644523 kubelet[2474]: E0302 14:27:40.644355 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:41.511561 kubelet[2474]: I0302 14:27:41.511316 2474 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.511298378 podStartE2EDuration="1.511298378s" podCreationTimestamp="2026-03-02 14:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:27:41.466482191 +0000 UTC m=+11.448306748" watchObservedRunningTime="2026-03-02 14:27:41.511298378 +0000 UTC m=+11.493122924" Mar 2 14:27:41.636971 kubelet[2474]: E0302 14:27:41.636934 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:41.978717 kubelet[2474]: I0302 14:27:41.978435 2474 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 14:27:42.052968 kubelet[2474]: E0302 14:27:42.048973 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:42.667491 kubelet[2474]: E0302 14:27:42.666541 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:44.004431 kubelet[2474]: I0302 14:27:43.999382 2474 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 14:27:44.069250 kubelet[2474]: E0302 14:27:44.067361 2474 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:44.681982 kubelet[2474]: E0302 14:27:44.680841 2474 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:27:45.559981 systemd[1]: Reload requested from client PID 2775 ('systemctl') (unit session-9.scope)... Mar 2 14:27:45.562784 systemd[1]: Reloading... Mar 2 14:27:46.272838 zram_generator::config[2818]: No configuration found. Mar 2 14:27:49.225176 systemd[1]: Reloading finished in 3659 ms. Mar 2 14:27:49.647275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:27:49.928433 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 14:27:49.935456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:27:49.935665 systemd[1]: kubelet.service: Consumed 2.838s CPU time, 129.6M memory peak. Mar 2 14:27:50.042713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 14:27:52.918226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 14:27:52.971346 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 14:27:54.530389 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 14:27:54.858972 kubelet[2864]: I0302 14:27:54.838204 2864 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 14:27:54.878839 kubelet[2864]: I0302 14:27:54.877957 2864 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 14:27:54.878839 kubelet[2864]: I0302 14:27:54.878240 2864 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 14:27:54.878839 kubelet[2864]: I0302 14:27:54.878261 2864 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 14:27:54.914902 kubelet[2864]: I0302 14:27:54.914871 2864 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 14:27:54.933179 kubelet[2864]: I0302 14:27:54.932934 2864 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 14:27:54.959928 kubelet[2864]: I0302 14:27:54.958538 2864 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 14:27:56.505919 kubelet[2864]: I0302 14:27:56.505387 2864 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 2 14:27:56.839518 kubelet[2864]: I0302 14:27:56.838343 2864 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 14:27:56.859385 kubelet[2864]: I0302 14:27:56.859322 2864 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 14:27:56.874874 kubelet[2864]: I0302 14:27:56.860295 2864 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 14:27:56.876938 kubelet[2864]: I0302 14:27:56.876910 2864 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 14:27:56.877201 
kubelet[2864]: I0302 14:27:56.877187 2864 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 14:27:56.877473 kubelet[2864]: I0302 14:27:56.877453 2864 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 14:27:56.877943 kubelet[2864]: I0302 14:27:56.877926 2864 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 14:27:56.900353 kubelet[2864]: I0302 14:27:56.899974 2864 kubelet.go:482] "Attempting to sync node with API server" Mar 2 14:27:56.908199 kubelet[2864]: I0302 14:27:56.908170 2864 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 14:27:56.929849 kubelet[2864]: I0302 14:27:56.929524 2864 kubelet.go:394] "Adding apiserver pod source" Mar 2 14:27:56.936808 kubelet[2864]: I0302 14:27:56.933796 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 14:27:57.138189 kubelet[2864]: I0302 14:27:57.134419 2864 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 2 14:27:57.154755 kubelet[2864]: I0302 14:27:57.151271 2864 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 14:27:57.154755 kubelet[2864]: I0302 14:27:57.151318 2864 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 14:27:57.335517 kubelet[2864]: I0302 14:27:57.331615 2864 server.go:1257] "Started kubelet" Mar 2 14:27:57.335517 kubelet[2864]: I0302 14:27:57.331785 2864 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 14:27:57.362484 kubelet[2864]: I0302 14:27:57.346952 2864 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 14:27:57.362484 kubelet[2864]: I0302 14:27:57.347222 2864 server_v1.go:49] 
"podresources" method="list" useActivePods=true Mar 2 14:27:57.362484 kubelet[2864]: I0302 14:27:57.345004 2864 server.go:317] "Adding debug handlers to kubelet server" Mar 2 14:27:57.371474 kubelet[2864]: I0302 14:27:57.371447 2864 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 14:27:57.432515 kubelet[2864]: I0302 14:27:57.415746 2864 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 14:27:57.458231 kubelet[2864]: I0302 14:27:57.439285 2864 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 14:27:57.458231 kubelet[2864]: I0302 14:27:57.439854 2864 reconciler.go:29] "Reconciler: start to sync state" Mar 2 14:27:57.458231 kubelet[2864]: I0302 14:27:57.440758 2864 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 14:27:57.462522 kubelet[2864]: I0302 14:27:57.454467 2864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 14:27:57.549982 kubelet[2864]: I0302 14:27:57.542007 2864 factory.go:223] Registration of the systemd container factory successfully Mar 2 14:27:57.645486 kubelet[2864]: I0302 14:27:57.644271 2864 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 14:27:57.751813 kubelet[2864]: I0302 14:27:57.751519 2864 factory.go:223] Registration of the containerd container factory successfully Mar 2 14:27:57.972897 kubelet[2864]: I0302 14:27:57.949331 2864 apiserver.go:52] "Watching apiserver" Mar 2 14:27:58.345486 kubelet[2864]: I0302 14:27:58.345256 2864 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 14:27:58.447248 kubelet[2864]: I0302 14:27:58.446199 2864 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 2 14:27:58.447248 kubelet[2864]: I0302 14:27:58.446303 2864 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 14:27:58.447248 kubelet[2864]: I0302 14:27:58.446490 2864 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 14:27:58.447248 kubelet[2864]: E0302 14:27:58.446798 2864 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 14:27:58.551460 kubelet[2864]: E0302 14:27:58.551365 2864 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 14:27:58.762821 kubelet[2864]: E0302 14:27:58.758834 2864 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 14:27:59.197169 kubelet[2864]: E0302 14:27:59.175849 2864 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 14:27:59.729915 kubelet[2864]: I0302 14:27:59.727780 2864 cpu_manager.go:225] "Starting" policy="none" Mar 2 14:27:59.806402 kubelet[2864]: I0302 14:27:59.761938 2864 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 14:27:59.806402 kubelet[2864]: I0302 14:27:59.801355 2864 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 14:27:59.820894 kubelet[2864]: I0302 14:27:59.810519 2864 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 2 14:27:59.832193 kubelet[2864]: I0302 14:27:59.826964 2864 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 2 14:27:59.832193 kubelet[2864]: I0302 14:27:59.832004 2864 policy_none.go:50] "Start" Mar 2 14:27:59.832193 kubelet[2864]: I0302 14:27:59.832240 2864 
memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 14:27:59.832193 kubelet[2864]: I0302 14:27:59.832338 2864 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 14:27:59.842277 kubelet[2864]: I0302 14:27:59.837307 2864 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 14:27:59.842277 kubelet[2864]: I0302 14:27:59.837325 2864 policy_none.go:44] "Start" Mar 2 14:28:00.017976 kubelet[2864]: E0302 14:28:00.015632 2864 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 14:28:00.076911 kubelet[2864]: E0302 14:28:00.076228 2864 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 14:28:00.076911 kubelet[2864]: I0302 14:28:00.076686 2864 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 14:28:00.076911 kubelet[2864]: I0302 14:28:00.076710 2864 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 14:28:00.099441 kubelet[2864]: I0302 14:28:00.093923 2864 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 14:28:00.138881 kubelet[2864]: I0302 14:28:00.127290 2864 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 14:28:00.147937 containerd[1557]: time="2026-03-02T14:28:00.142755975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 14:28:00.205991 kubelet[2864]: I0302 14:28:00.144516 2864 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 14:28:00.205991 kubelet[2864]: E0302 14:28:00.170835 2864 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 14:28:00.695716 kubelet[2864]: I0302 14:28:00.675766 2864 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 14:28:00.974005 kubelet[2864]: I0302 14:28:00.965901 2864 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 2 14:28:01.011416 kubelet[2864]: I0302 14:28:01.010226 2864 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 2 14:28:01.635376 kubelet[2864]: I0302 14:28:01.630762 2864 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 14:28:01.685458 kubelet[2864]: I0302 14:28:01.681168 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37e80e8f-756f-4bc5-81af-9b7a5216019c-kube-proxy\") pod \"kube-proxy-cxdmr\" (UID: \"37e80e8f-756f-4bc5-81af-9b7a5216019c\") " pod="kube-system/kube-proxy-cxdmr" Mar 2 14:28:01.685458 kubelet[2864]: I0302 14:28:01.684309 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/250a41d7e51f37f55d22c66e3e26abd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"250a41d7e51f37f55d22c66e3e26abd8\") " pod="kube-system/kube-apiserver-localhost" Mar 2 14:28:01.706624 kubelet[2864]: I0302 14:28:01.696801 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:28:01.706624 kubelet[2864]: I0302 14:28:01.696935 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/37e80e8f-756f-4bc5-81af-9b7a5216019c-xtables-lock\") pod \"kube-proxy-cxdmr\" (UID: \"37e80e8f-756f-4bc5-81af-9b7a5216019c\") " pod="kube-system/kube-proxy-cxdmr" Mar 2 14:28:01.706624 kubelet[2864]: I0302 14:28:01.696990 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37e80e8f-756f-4bc5-81af-9b7a5216019c-lib-modules\") pod \"kube-proxy-cxdmr\" (UID: \"37e80e8f-756f-4bc5-81af-9b7a5216019c\") " pod="kube-system/kube-proxy-cxdmr" Mar 2 14:28:01.727245 kubelet[2864]: I0302 14:28:01.725371 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdxk\" (UniqueName: \"kubernetes.io/projected/37e80e8f-756f-4bc5-81af-9b7a5216019c-kube-api-access-gjdxk\") pod \"kube-proxy-cxdmr\" (UID: \"37e80e8f-756f-4bc5-81af-9b7a5216019c\") " pod="kube-system/kube-proxy-cxdmr" Mar 2 14:28:01.727245 kubelet[2864]: I0302 14:28:01.725424 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/250a41d7e51f37f55d22c66e3e26abd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"250a41d7e51f37f55d22c66e3e26abd8\") " pod="kube-system/kube-apiserver-localhost" Mar 2 14:28:01.727245 kubelet[2864]: I0302 14:28:01.725456 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/250a41d7e51f37f55d22c66e3e26abd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"250a41d7e51f37f55d22c66e3e26abd8\") " pod="kube-system/kube-apiserver-localhost" Mar 2 14:28:01.727245 kubelet[2864]: I0302 14:28:01.725477 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:28:01.727245 kubelet[2864]: I0302 14:28:01.725500 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:28:01.755808 kubelet[2864]: I0302 14:28:01.732490 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:28:01.755808 kubelet[2864]: I0302 14:28:01.732722 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 14:28:01.755808 kubelet[2864]: I0302 14:28:01.732752 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 14:28:01.756492 kubelet[2864]: I0302 14:28:01.756461 2864 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 
14:28:01.878857 kubelet[2864]: E0302 14:28:01.876276 2864 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 14:28:01.878857 kubelet[2864]: E0302 14:28:01.877969 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:01.951192 kubelet[2864]: E0302 14:28:01.950894 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:01.979417 kubelet[2864]: E0302 14:28:01.979273 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:02.043326 systemd[1]: Created slice kubepods-besteffort-pod37e80e8f_756f_4bc5_81af_9b7a5216019c.slice - libcontainer container kubepods-besteffort-pod37e80e8f_756f_4bc5_81af_9b7a5216019c.slice. 
Mar 2 14:28:02.548387 kubelet[2864]: E0302 14:28:02.522899 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:02.654747 kubelet[2864]: I0302 14:28:02.652925 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=18.652909076 podStartE2EDuration="18.652909076s" podCreationTimestamp="2026-03-02 14:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:28:02.317271503 +0000 UTC m=+9.294426630" watchObservedRunningTime="2026-03-02 14:28:02.652909076 +0000 UTC m=+9.630064192" Mar 2 14:28:02.668235 kubelet[2864]: I0302 14:28:02.662858 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=21.662829548 podStartE2EDuration="21.662829548s" podCreationTimestamp="2026-03-02 14:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:28:02.652703453 +0000 UTC m=+9.629858570" watchObservedRunningTime="2026-03-02 14:28:02.662829548 +0000 UTC m=+9.639984675" Mar 2 14:28:04.023267 containerd[1557]: time="2026-03-02T14:28:04.021921871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cxdmr,Uid:37e80e8f-756f-4bc5-81af-9b7a5216019c,Namespace:kube-system,Attempt:0,}" Mar 2 14:28:04.063452 kubelet[2864]: E0302 14:28:04.036270 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:04.063452 kubelet[2864]: E0302 14:28:04.041731 2864 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="1.584s" Mar 2 14:28:04.063452 kubelet[2864]: E0302 14:28:04.051706 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:04.111254 kubelet[2864]: E0302 14:28:04.093713 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:04.734470 containerd[1557]: time="2026-03-02T14:28:04.734405643Z" level=info msg="connecting to shim 3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c" address="unix:///run/containerd/s/741221725027513160b1455e1fa9d89bb65c63b99673c086f03bcdc304d59b23" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:28:05.052364 kubelet[2864]: E0302 14:28:05.045415 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:05.068281 kubelet[2864]: E0302 14:28:05.065472 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:05.227906 systemd[1]: Started cri-containerd-3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c.scope - libcontainer container 3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c. Mar 2 14:28:05.831375 systemd[1]: Created slice kubepods-besteffort-pod90425291_1c64_4651_bd4f_9da0a3ec71c1.slice - libcontainer container kubepods-besteffort-pod90425291_1c64_4651_bd4f_9da0a3ec71c1.slice. 
Mar 2 14:28:05.931902 kubelet[2864]: I0302 14:28:05.928365 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/90425291-1c64-4651-bd4f-9da0a3ec71c1-var-lib-calico\") pod \"tigera-operator-6447996989-5d5xn\" (UID: \"90425291-1c64-4651-bd4f-9da0a3ec71c1\") " pod="tigera-operator/tigera-operator-6447996989-5d5xn" Mar 2 14:28:05.931902 kubelet[2864]: I0302 14:28:05.928768 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrv6q\" (UniqueName: \"kubernetes.io/projected/90425291-1c64-4651-bd4f-9da0a3ec71c1-kube-api-access-vrv6q\") pod \"tigera-operator-6447996989-5d5xn\" (UID: \"90425291-1c64-4651-bd4f-9da0a3ec71c1\") " pod="tigera-operator/tigera-operator-6447996989-5d5xn" Mar 2 14:28:06.304971 containerd[1557]: time="2026-03-02T14:28:06.304862229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cxdmr,Uid:37e80e8f-756f-4bc5-81af-9b7a5216019c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c\"" Mar 2 14:28:06.306278 kubelet[2864]: E0302 14:28:06.305922 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:06.337834 containerd[1557]: time="2026-03-02T14:28:06.337787627Z" level=info msg="CreateContainer within sandbox \"3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 14:28:06.473671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851068830.mount: Deactivated successfully. 
Mar 2 14:28:06.544429 containerd[1557]: time="2026-03-02T14:28:06.543970967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-5d5xn,Uid:90425291-1c64-4651-bd4f-9da0a3ec71c1,Namespace:tigera-operator,Attempt:0,}" Mar 2 14:28:06.564851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326436017.mount: Deactivated successfully. Mar 2 14:28:06.583398 containerd[1557]: time="2026-03-02T14:28:06.573919272Z" level=info msg="Container 0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:28:06.713847 containerd[1557]: time="2026-03-02T14:28:06.713406602Z" level=info msg="CreateContainer within sandbox \"3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750\"" Mar 2 14:28:06.742171 containerd[1557]: time="2026-03-02T14:28:06.741309824Z" level=info msg="StartContainer for \"0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750\"" Mar 2 14:28:06.754211 containerd[1557]: time="2026-03-02T14:28:06.754000247Z" level=info msg="connecting to shim 0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750" address="unix:///run/containerd/s/741221725027513160b1455e1fa9d89bb65c63b99673c086f03bcdc304d59b23" protocol=ttrpc version=3 Mar 2 14:28:06.846276 containerd[1557]: time="2026-03-02T14:28:06.845864574Z" level=info msg="connecting to shim 085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b" address="unix:///run/containerd/s/bfd9f8b10782e734b91862e692fadd7b3284d2bdfd483324c1860083374f841b" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:28:07.281848 systemd[1]: Started cri-containerd-0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750.scope - libcontainer container 0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750. 
Mar 2 14:28:07.360356 systemd[1]: Started cri-containerd-085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b.scope - libcontainer container 085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b. Mar 2 14:28:08.252725 containerd[1557]: time="2026-03-02T14:28:08.252271213Z" level=info msg="StartContainer for \"0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750\" returns successfully" Mar 2 14:28:08.839978 kubelet[2864]: E0302 14:28:08.839883 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:09.133456 containerd[1557]: time="2026-03-02T14:28:09.132952521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-5d5xn,Uid:90425291-1c64-4651-bd4f-9da0a3ec71c1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b\"" Mar 2 14:28:09.198482 containerd[1557]: time="2026-03-02T14:28:09.190868654Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\"" Mar 2 14:28:10.014005 kubelet[2864]: E0302 14:28:10.011862 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:28:11.711216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363887472.mount: Deactivated successfully. 
Mar 2 14:28:42.559888 containerd[1557]: time="2026-03-02T14:28:42.559823490Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:28:42.570415 containerd[1557]: time="2026-03-02T14:28:42.567961150Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=40822719" Mar 2 14:28:42.579863 containerd[1557]: time="2026-03-02T14:28:42.576848658Z" level=info msg="ImageCreate event name:\"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:28:42.604460 containerd[1557]: time="2026-03-02T14:28:42.604410921Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:28:42.611537 containerd[1557]: time="2026-03-02T14:28:42.611220048Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"40818714\" in 33.42029062s" Mar 2 14:28:42.611537 containerd[1557]: time="2026-03-02T14:28:42.611264320Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\"" Mar 2 14:28:42.690205 containerd[1557]: time="2026-03-02T14:28:42.689657881Z" level=info msg="CreateContainer within sandbox \"085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 2 14:28:42.798896 containerd[1557]: time="2026-03-02T14:28:42.798737822Z" level=info msg="Container 
a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:28:42.920336 containerd[1557]: time="2026-03-02T14:28:42.918644740Z" level=info msg="CreateContainer within sandbox \"085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1\"" Mar 2 14:28:42.945748 containerd[1557]: time="2026-03-02T14:28:42.941543425Z" level=info msg="StartContainer for \"a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1\"" Mar 2 14:28:42.948721 containerd[1557]: time="2026-03-02T14:28:42.948687276Z" level=info msg="connecting to shim a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1" address="unix:///run/containerd/s/bfd9f8b10782e734b91862e692fadd7b3284d2bdfd483324c1860083374f841b" protocol=ttrpc version=3 Mar 2 14:28:43.478525 systemd[1]: Started cri-containerd-a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1.scope - libcontainer container a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1. 
Mar 2 14:28:43.848537 containerd[1557]: time="2026-03-02T14:28:43.845727644Z" level=info msg="StartContainer for \"a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1\" returns successfully" Mar 2 14:28:44.729805 kubelet[2864]: I0302 14:28:44.723967 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-cxdmr" podStartSLOduration=49.723952753 podStartE2EDuration="49.723952753s" podCreationTimestamp="2026-03-02 14:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:28:09.050715827 +0000 UTC m=+16.027870954" watchObservedRunningTime="2026-03-02 14:28:44.723952753 +0000 UTC m=+51.701107901" Mar 2 14:28:44.739766 kubelet[2864]: I0302 14:28:44.733713 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6447996989-5d5xn" podStartSLOduration=6.292712816 podStartE2EDuration="39.733703302s" podCreationTimestamp="2026-03-02 14:28:05 +0000 UTC" firstStartedPulling="2026-03-02 14:28:09.173946307 +0000 UTC m=+16.151101424" lastFinishedPulling="2026-03-02 14:28:42.614936783 +0000 UTC m=+49.592091910" observedRunningTime="2026-03-02 14:28:44.723399923 +0000 UTC m=+51.700555050" watchObservedRunningTime="2026-03-02 14:28:44.733703302 +0000 UTC m=+51.710858429" Mar 2 14:29:09.482479 kubelet[2864]: E0302 14:29:09.482441 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:12.656342 sudo[1778]: pam_unix(sudo:session): session closed for user root Mar 2 14:29:12.690760 sshd[1777]: Connection closed by 10.0.0.1 port 51302 Mar 2 14:29:12.727446 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Mar 2 14:29:12.800456 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:51302.service: Deactivated successfully. 
Mar 2 14:29:12.837442 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 14:29:12.847807 systemd[1]: session-9.scope: Consumed 17.374s CPU time, 232.3M memory peak. Mar 2 14:29:12.910876 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Mar 2 14:29:12.947962 systemd-logind[1531]: Removed session 9. Mar 2 14:29:19.454593 kubelet[2864]: E0302 14:29:19.454552 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:26.470157 kubelet[2864]: E0302 14:29:26.469951 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:30.442297 systemd[1]: Created slice kubepods-besteffort-pod80c776fe_8375_4802_9344_2aea9db1d5e7.slice - libcontainer container kubepods-besteffort-pod80c776fe_8375_4802_9344_2aea9db1d5e7.slice. 
Mar 2 14:29:30.499413 kubelet[2864]: I0302 14:29:30.498229 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/80c776fe-8375-4802-9344-2aea9db1d5e7-typha-certs\") pod \"calico-typha-56ff5c6d67-z8kb2\" (UID: \"80c776fe-8375-4802-9344-2aea9db1d5e7\") " pod="calico-system/calico-typha-56ff5c6d67-z8kb2" Mar 2 14:29:30.499413 kubelet[2864]: I0302 14:29:30.498282 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrhh\" (UniqueName: \"kubernetes.io/projected/80c776fe-8375-4802-9344-2aea9db1d5e7-kube-api-access-8zrhh\") pod \"calico-typha-56ff5c6d67-z8kb2\" (UID: \"80c776fe-8375-4802-9344-2aea9db1d5e7\") " pod="calico-system/calico-typha-56ff5c6d67-z8kb2" Mar 2 14:29:30.499413 kubelet[2864]: I0302 14:29:30.498311 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80c776fe-8375-4802-9344-2aea9db1d5e7-tigera-ca-bundle\") pod \"calico-typha-56ff5c6d67-z8kb2\" (UID: \"80c776fe-8375-4802-9344-2aea9db1d5e7\") " pod="calico-system/calico-typha-56ff5c6d67-z8kb2" Mar 2 14:29:30.861199 kubelet[2864]: E0302 14:29:30.857265 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:30.869438 containerd[1557]: time="2026-03-02T14:29:30.868757410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56ff5c6d67-z8kb2,Uid:80c776fe-8375-4802-9344-2aea9db1d5e7,Namespace:calico-system,Attempt:0,}" Mar 2 14:29:31.399556 systemd[1]: Created slice kubepods-besteffort-pod6d9322a9_4845_4753_a1ba_830188be584c.slice - libcontainer container kubepods-besteffort-pod6d9322a9_4845_4753_a1ba_830188be584c.slice. 
Mar 2 14:29:31.460915 containerd[1557]: time="2026-03-02T14:29:31.460863573Z" level=info msg="connecting to shim dfea084b24bffdc0755d36ffc5ced753e94c41dd0770ba816e99eec23efe6e39" address="unix:///run/containerd/s/d719ce2e3b4514a718216b8b977bc7b7c0af5737d76fe581e79cf48dbf8a9686" namespace=k8s.io protocol=ttrpc version=3
Mar 2 14:29:31.477276 kubelet[2864]: I0302 14:29:31.477227 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-nodeproc\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.477516 kubelet[2864]: I0302 14:29:31.477496 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-bpffs\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.477890 kubelet[2864]: I0302 14:29:31.477831 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-flexvol-driver-host\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.478005 kubelet[2864]: I0302 14:29:31.477986 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t5n\" (UniqueName: \"kubernetes.io/projected/6d9322a9-4845-4753-a1ba-830188be584c-kube-api-access-s2t5n\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.508325 kubelet[2864]: I0302 14:29:31.508273 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-cni-bin-dir\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.509462 kubelet[2864]: I0302 14:29:31.509239 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d9322a9-4845-4753-a1ba-830188be584c-tigera-ca-bundle\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.519232 kubelet[2864]: I0302 14:29:31.519188 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-var-run-calico\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.519466 kubelet[2864]: I0302 14:29:31.519440 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-cni-log-dir\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.519860 kubelet[2864]: I0302 14:29:31.519575 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-lib-modules\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.520934 kubelet[2864]: I0302 14:29:31.520911 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d9322a9-4845-4753-a1ba-830188be584c-node-certs\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.521272 kubelet[2864]: I0302 14:29:31.521250 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-xtables-lock\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.521460 kubelet[2864]: I0302 14:29:31.521376 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-var-lib-calico\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.560753 kubelet[2864]: I0302 14:29:31.552879 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-cni-net-dir\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.560753 kubelet[2864]: I0302 14:29:31.553162 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-policysync\") pod \"calico-node-22psh\" (UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.560753 kubelet[2864]: I0302 14:29:31.553194 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/6d9322a9-4845-4753-a1ba-830188be584c-sys-fs\") pod \"calico-node-22psh\"
(UID: \"6d9322a9-4845-4753-a1ba-830188be584c\") " pod="calico-system/calico-node-22psh"
Mar 2 14:29:31.668954 kubelet[2864]: E0302 14:29:31.665222 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:31.807170 kubelet[2864]: E0302 14:29:31.801562 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.807170 kubelet[2864]: W0302 14:29:31.801597 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.807170 kubelet[2864]: E0302 14:29:31.801633 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.807170 kubelet[2864]: E0302 14:29:31.803952 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.807170 kubelet[2864]: W0302 14:29:31.803965 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.807170 kubelet[2864]: E0302 14:29:31.803982 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.829526 kubelet[2864]: E0302 14:29:31.829176 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.829526 kubelet[2864]: W0302 14:29:31.829211 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.829526 kubelet[2864]: E0302 14:29:31.829244 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.841439 kubelet[2864]: E0302 14:29:31.840830 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.841439 kubelet[2864]: W0302 14:29:31.840859 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.841439 kubelet[2864]: E0302 14:29:31.840889 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.865749 kubelet[2864]: E0302 14:29:31.857256 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.865749 kubelet[2864]: W0302 14:29:31.857282 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.865749 kubelet[2864]: E0302 14:29:31.857311 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.924000 kubelet[2864]: E0302 14:29:31.922212 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.924000 kubelet[2864]: W0302 14:29:31.922239 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.924000 kubelet[2864]: E0302 14:29:31.922271 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.931124 kubelet[2864]: E0302 14:29:31.930473 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.931124 kubelet[2864]: W0302 14:29:31.930497 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.937869 kubelet[2864]: E0302 14:29:31.933352 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.937869 kubelet[2864]: E0302 14:29:31.933767 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.937869 kubelet[2864]: W0302 14:29:31.933780 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.937869 kubelet[2864]: E0302 14:29:31.933796 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.979940 kubelet[2864]: E0302 14:29:31.978610 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.979940 kubelet[2864]: W0302 14:29:31.978638 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.979940 kubelet[2864]: E0302 14:29:31.981159 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.979940 kubelet[2864]: E0302 14:29:32.002299 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.979940 kubelet[2864]: W0302 14:29:32.002324 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.979940 kubelet[2864]: E0302 14:29:32.002352 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.979940 kubelet[2864]: E0302 14:29:32.002613 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:31.979940 kubelet[2864]: W0302 14:29:32.002623 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:31.979940 kubelet[2864]: E0302 14:29:32.002635 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:31.979940 kubelet[2864]: E0302 14:29:32.004195 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.016924 kubelet[2864]: W0302 14:29:32.004206 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.016924 kubelet[2864]: E0302 14:29:32.004219 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.016924 kubelet[2864]: E0302 14:29:32.004440 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.016924 kubelet[2864]: W0302 14:29:32.004450 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.016924 kubelet[2864]: E0302 14:29:32.004463 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.016924 kubelet[2864]: E0302 14:29:32.004894 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.016924 kubelet[2864]: W0302 14:29:32.004907 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.016924 kubelet[2864]: E0302 14:29:32.004920 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.016924 kubelet[2864]: I0302 14:29:32.004945 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61e0daf6-539e-4fdf-8264-3ce0c18def88-kubelet-dir\") pod \"csi-node-driver-5824n\" (UID: \"61e0daf6-539e-4fdf-8264-3ce0c18def88\") " pod="calico-system/csi-node-driver-5824n"
Mar 2 14:29:32.017348 kubelet[2864]: E0302 14:29:32.005360 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.017348 kubelet[2864]: W0302 14:29:32.005374 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.017348 kubelet[2864]: E0302 14:29:32.005390 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.017348 kubelet[2864]: I0302 14:29:32.005410 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/61e0daf6-539e-4fdf-8264-3ce0c18def88-registration-dir\") pod \"csi-node-driver-5824n\" (UID: \"61e0daf6-539e-4fdf-8264-3ce0c18def88\") " pod="calico-system/csi-node-driver-5824n"
Mar 2 14:29:32.017348 kubelet[2864]: E0302 14:29:32.005622 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.017348 kubelet[2864]: W0302 14:29:32.005636 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.017348 kubelet[2864]: E0302 14:29:32.005647 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.017348 kubelet[2864]: I0302 14:29:32.010920 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/61e0daf6-539e-4fdf-8264-3ce0c18def88-socket-dir\") pod \"csi-node-driver-5824n\" (UID: \"61e0daf6-539e-4fdf-8264-3ce0c18def88\") " pod="calico-system/csi-node-driver-5824n"
Mar 2 14:29:32.017348 kubelet[2864]: E0302 14:29:32.011385 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.018893 kubelet[2864]: W0302 14:29:32.011398 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.018893 kubelet[2864]: E0302 14:29:32.011414 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.018893 kubelet[2864]: E0302 14:29:32.011618 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.018893 kubelet[2864]: W0302 14:29:32.011629 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.018893 kubelet[2864]: E0302 14:29:32.011644 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.018893 kubelet[2864]: E0302 14:29:32.012297 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.018893 kubelet[2864]: W0302 14:29:32.012308 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.018893 kubelet[2864]: E0302 14:29:32.012324 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.018893 kubelet[2864]: E0302 14:29:32.012524 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.018893 kubelet[2864]: W0302 14:29:32.012533 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.019530 kubelet[2864]: E0302 14:29:32.012544 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.019530 kubelet[2864]: E0302 14:29:32.012818 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.019530 kubelet[2864]: W0302 14:29:32.012827 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.019530 kubelet[2864]: E0302 14:29:32.012838 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.019530 kubelet[2864]: E0302 14:29:32.013201 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.019530 kubelet[2864]: W0302 14:29:32.013210 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.019530 kubelet[2864]: E0302 14:29:32.013220 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.019530 kubelet[2864]: E0302 14:29:32.013426 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.019530 kubelet[2864]: W0302 14:29:32.013436 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.019530 kubelet[2864]: E0302 14:29:32.013446 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.022634 kubelet[2864]: E0302 14:29:32.017289 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.022634 kubelet[2864]: W0302 14:29:32.017303 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.022634 kubelet[2864]: E0302 14:29:32.017317 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.022634 kubelet[2864]: E0302 14:29:32.017542 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.022634 kubelet[2864]: W0302 14:29:32.017554 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.022634 kubelet[2864]: E0302 14:29:32.017567 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.022634 kubelet[2864]: E0302 14:29:32.017877 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.022634 kubelet[2864]: W0302 14:29:32.017887 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.022634 kubelet[2864]: E0302 14:29:32.017899 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.022634 kubelet[2864]: E0302 14:29:32.018468 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.023156 kubelet[2864]: W0302 14:29:32.018479 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.023156 kubelet[2864]: E0302 14:29:32.018492 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.023156 kubelet[2864]: E0302 14:29:32.018782 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.023156 kubelet[2864]: W0302 14:29:32.018792 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.023156 kubelet[2864]: E0302 14:29:32.018804 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.023156 kubelet[2864]: E0302 14:29:32.019222 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.023156 kubelet[2864]: W0302 14:29:32.019235 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.023156 kubelet[2864]: E0302 14:29:32.019248 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.025981 kubelet[2864]: E0302 14:29:32.024977 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.025981 kubelet[2864]: W0302 14:29:32.024992 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.025981 kubelet[2864]: E0302 14:29:32.025279 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.143299 kubelet[2864]: E0302 14:29:32.143255 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.158271 kubelet[2864]: W0302 14:29:32.155440 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.158947 kubelet[2864]: E0302 14:29:32.158928 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.159223 kubelet[2864]: I0302 14:29:32.159204 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/61e0daf6-539e-4fdf-8264-3ce0c18def88-varrun\") pod \"csi-node-driver-5824n\" (UID: \"61e0daf6-539e-4fdf-8264-3ce0c18def88\") " pod="calico-system/csi-node-driver-5824n"
Mar 2 14:29:32.195547 kubelet[2864]: E0302 14:29:32.182550 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.195547 kubelet[2864]: W0302 14:29:32.182644 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.195547 kubelet[2864]: E0302 14:29:32.182751 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.202143 kubelet[2864]: E0302 14:29:32.196855 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.202338 kubelet[2864]: W0302 14:29:32.202310 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.202386 kubelet[2864]: E0302 14:29:32.202342 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.219911 kubelet[2864]: E0302 14:29:32.217879 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.219911 kubelet[2864]: W0302 14:29:32.217913 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.219911 kubelet[2864]: E0302 14:29:32.217942 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.241768 kubelet[2864]: E0302 14:29:32.241616 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.241768 kubelet[2864]: W0302 14:29:32.241763 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.242262 kubelet[2864]: E0302 14:29:32.241795 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.243632 kubelet[2864]: E0302 14:29:32.243392 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.243632 kubelet[2864]: W0302 14:29:32.243477 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.243632 kubelet[2864]: E0302 14:29:32.243495 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.256177 kubelet[2864]: E0302 14:29:32.247003 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.256177 kubelet[2864]: W0302 14:29:32.250973 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.256177 kubelet[2864]: E0302 14:29:32.251524 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.260262 kubelet[2864]: E0302 14:29:32.259570 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.260262 kubelet[2864]: W0302 14:29:32.259915 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.260262 kubelet[2864]: E0302 14:29:32.259946 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.266611 kubelet[2864]: E0302 14:29:32.265606 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.266611 kubelet[2864]: W0302 14:29:32.265756 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.266611 kubelet[2864]: E0302 14:29:32.265779 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.273432 kubelet[2864]: E0302 14:29:32.273318 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.273432 kubelet[2864]: W0302 14:29:32.273420 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.273616 kubelet[2864]: E0302 14:29:32.273452 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.294397 kubelet[2864]: E0302 14:29:32.281274 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.294397 kubelet[2864]: W0302 14:29:32.281296 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.294397 kubelet[2864]: E0302 14:29:32.281324 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.294397 kubelet[2864]: E0302 14:29:32.281655 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.294397 kubelet[2864]: W0302 14:29:32.281750 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.294397 kubelet[2864]: E0302 14:29:32.281765 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.294397 kubelet[2864]: E0302 14:29:32.286297 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.294397 kubelet[2864]: W0302 14:29:32.286312 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.294397 kubelet[2864]: E0302 14:29:32.286327 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.294397 kubelet[2864]: E0302 14:29:32.288407 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.294918 kubelet[2864]: W0302 14:29:32.288419 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.294918 kubelet[2864]: E0302 14:29:32.288437 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 2 14:29:32.294918 kubelet[2864]: E0302 14:29:32.292939 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.294918 kubelet[2864]: W0302 14:29:32.292953 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.294918 kubelet[2864]: E0302 14:29:32.292969 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 14:29:32.294918 kubelet[2864]: I0302 14:29:32.293877 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpflq\" (UniqueName: \"kubernetes.io/projected/61e0daf6-539e-4fdf-8264-3ce0c18def88-kube-api-access-vpflq\") pod \"csi-node-driver-5824n\" (UID: \"61e0daf6-539e-4fdf-8264-3ce0c18def88\") " pod="calico-system/csi-node-driver-5824n"
Mar 2 14:29:32.294918 kubelet[2864]: E0302 14:29:32.294389 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:32.294918 kubelet[2864]: W0302 14:29:32.294403 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:32.294918 kubelet[2864]: E0302 14:29:32.294417 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:32.300983 kubelet[2864]: E0302 14:29:32.299242 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:32.300983 kubelet[2864]: W0302 14:29:32.299320 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:32.300983 kubelet[2864]: E0302 14:29:32.299338 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:32.341837 kubelet[2864]: E0302 14:29:32.341419 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:32.341837 kubelet[2864]: W0302 14:29:32.341520 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:32.341837 kubelet[2864]: E0302 14:29:32.341554 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:32.355177 kubelet[2864]: E0302 14:29:32.354873 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:32.355177 kubelet[2864]: W0302 14:29:32.354967 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:32.355177 kubelet[2864]: E0302 14:29:32.354996 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:32.362251 kubelet[2864]: E0302 14:29:32.361957 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:32.369741 kubelet[2864]: W0302 14:29:32.364149 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:32.369741 kubelet[2864]: E0302 14:29:32.364248 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:32.369741 kubelet[2864]: E0302 14:29:32.367834 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:32.369741 kubelet[2864]: W0302 14:29:32.367849 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:32.369741 kubelet[2864]: E0302 14:29:32.367869 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:32.371856 kubelet[2864]: E0302 14:29:32.371766 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:32.371856 kubelet[2864]: W0302 14:29:32.371785 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:32.371856 kubelet[2864]: E0302 14:29:32.371807 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:32.393557 systemd[1]: Started cri-containerd-dfea084b24bffdc0755d36ffc5ced753e94c41dd0770ba816e99eec23efe6e39.scope - libcontainer container dfea084b24bffdc0755d36ffc5ced753e94c41dd0770ba816e99eec23efe6e39. 
Mar 2 14:29:32.530543 containerd[1557]: time="2026-03-02T14:29:32.530487828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-22psh,Uid:6d9322a9-4845-4753-a1ba-830188be584c,Namespace:calico-system,Attempt:0,}"
Mar 2 14:29:32.766883 containerd[1557]: time="2026-03-02T14:29:32.765211046Z" level=info msg="connecting to shim b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1" address="unix:///run/containerd/s/a6de1ac5193d90ce90ee0318c822a27c6bf3f08b22f0a97c3330820281e211c6" namespace=k8s.io protocol=ttrpc version=3
Mar 2 14:29:33.101363 systemd[1]: Started cri-containerd-b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1.scope - libcontainer container b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1.
Mar 2 14:29:33.409506 containerd[1557]: time="2026-03-02T14:29:33.409402354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56ff5c6d67-z8kb2,Uid:80c776fe-8375-4802-9344-2aea9db1d5e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfea084b24bffdc0755d36ffc5ced753e94c41dd0770ba816e99eec23efe6e39\""
Mar 2 14:29:33.438877 kubelet[2864]: E0302 14:29:33.438378 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:29:33.455992 kubelet[2864]: E0302 14:29:33.449879 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:33.467627 containerd[1557]: time="2026-03-02T14:29:33.450803056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\""
Mar 2 14:29:33.467627 containerd[1557]: time="2026-03-02T14:29:33.457771586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-22psh,Uid:6d9322a9-4845-4753-a1ba-830188be584c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\""
Mar 2 14:29:34.451949 kubelet[2864]: E0302 14:29:34.450339 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:29:35.390380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510377921.mount: Deactivated successfully.
Mar 2 14:29:35.450156 kubelet[2864]: E0302 14:29:35.448006 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:37.458967 kubelet[2864]: E0302 14:29:37.457974 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:39.450280 kubelet[2864]: E0302 14:29:39.447002 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:41.475235 kubelet[2864]: E0302 14:29:41.475170 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:43.449527 kubelet[2864]: E0302 14:29:43.449473 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:45.451281 kubelet[2864]: E0302 14:29:45.450415 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:45.615946 containerd[1557]: time="2026-03-02T14:29:45.615828276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:29:45.620775 containerd[1557]: time="2026-03-02T14:29:45.619971481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=36094696"
Mar 2 14:29:45.626159 containerd[1557]: time="2026-03-02T14:29:45.624278459Z" level=info msg="ImageCreate event name:\"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:29:45.635149 containerd[1557]: time="2026-03-02T14:29:45.634973108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:29:45.637825 containerd[1557]: time="2026-03-02T14:29:45.634973836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"36094550\" in 12.184129775s"
Mar 2 14:29:45.637825 containerd[1557]: time="2026-03-02T14:29:45.635771216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\""
Mar 2 14:29:45.641798 containerd[1557]: time="2026-03-02T14:29:45.641443845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\""
Mar 2 14:29:45.706202 containerd[1557]: time="2026-03-02T14:29:45.702204527Z" level=info msg="CreateContainer within sandbox \"dfea084b24bffdc0755d36ffc5ced753e94c41dd0770ba816e99eec23efe6e39\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 2 14:29:45.728668 containerd[1557]: time="2026-03-02T14:29:45.728612325Z" level=info msg="Container c6526af304d7c7933a572ca3f1846ca382a82ffcb3a1445e2ea2ab290cc83e0b: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:29:45.773591 containerd[1557]: time="2026-03-02T14:29:45.773379759Z" level=info msg="CreateContainer within sandbox \"dfea084b24bffdc0755d36ffc5ced753e94c41dd0770ba816e99eec23efe6e39\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6526af304d7c7933a572ca3f1846ca382a82ffcb3a1445e2ea2ab290cc83e0b\""
Mar 2 14:29:45.777244 containerd[1557]: time="2026-03-02T14:29:45.777214194Z" level=info msg="StartContainer for \"c6526af304d7c7933a572ca3f1846ca382a82ffcb3a1445e2ea2ab290cc83e0b\""
Mar 2 14:29:45.783340 containerd[1557]: time="2026-03-02T14:29:45.781670797Z" level=info msg="connecting to shim c6526af304d7c7933a572ca3f1846ca382a82ffcb3a1445e2ea2ab290cc83e0b" address="unix:///run/containerd/s/d719ce2e3b4514a718216b8b977bc7b7c0af5737d76fe581e79cf48dbf8a9686" protocol=ttrpc version=3
Mar 2 14:29:45.959187 systemd[1]: Started cri-containerd-c6526af304d7c7933a572ca3f1846ca382a82ffcb3a1445e2ea2ab290cc83e0b.scope - libcontainer container c6526af304d7c7933a572ca3f1846ca382a82ffcb3a1445e2ea2ab290cc83e0b.
Mar 2 14:29:46.413553 containerd[1557]: time="2026-03-02T14:29:46.413419635Z" level=info msg="StartContainer for \"c6526af304d7c7933a572ca3f1846ca382a82ffcb3a1445e2ea2ab290cc83e0b\" returns successfully"
Mar 2 14:29:47.482967 kubelet[2864]: E0302 14:29:47.447505 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88"
Mar 2 14:29:47.517166 kubelet[2864]: E0302 14:29:47.514864 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:29:47.757483 kubelet[2864]: E0302 14:29:47.757444 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 14:29:47.758205 kubelet[2864]: W0302 14:29:47.758178 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 14:29:47.758324 kubelet[2864]: E0302 14:29:47.758304 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.764258 kubelet[2864]: E0302 14:29:47.760609 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.764393 kubelet[2864]: W0302 14:29:47.764373 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.764487 kubelet[2864]: E0302 14:29:47.764468 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.767603 kubelet[2864]: E0302 14:29:47.767586 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.768005 kubelet[2864]: W0302 14:29:47.767981 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.768534 kubelet[2864]: E0302 14:29:47.768516 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.772338 kubelet[2864]: E0302 14:29:47.772275 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.772338 kubelet[2864]: W0302 14:29:47.772292 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.772338 kubelet[2864]: E0302 14:29:47.772311 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.774972 kubelet[2864]: E0302 14:29:47.774227 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.774972 kubelet[2864]: W0302 14:29:47.774245 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.774972 kubelet[2864]: E0302 14:29:47.774259 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.776002 kubelet[2864]: E0302 14:29:47.775857 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.776002 kubelet[2864]: W0302 14:29:47.775972 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.776002 kubelet[2864]: E0302 14:29:47.775986 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.778484 kubelet[2864]: E0302 14:29:47.778195 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.778484 kubelet[2864]: W0302 14:29:47.778213 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.778484 kubelet[2864]: E0302 14:29:47.778226 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.814268 kubelet[2864]: E0302 14:29:47.804320 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.814268 kubelet[2864]: W0302 14:29:47.804343 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.814268 kubelet[2864]: E0302 14:29:47.804370 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.814268 kubelet[2864]: E0302 14:29:47.811331 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.814268 kubelet[2864]: W0302 14:29:47.811345 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.814268 kubelet[2864]: E0302 14:29:47.811365 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.815975 kubelet[2864]: E0302 14:29:47.815956 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.816369 kubelet[2864]: W0302 14:29:47.816211 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.816369 kubelet[2864]: E0302 14:29:47.816239 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.816642 kubelet[2864]: E0302 14:29:47.816627 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.816803 kubelet[2864]: W0302 14:29:47.816786 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.816927 kubelet[2864]: E0302 14:29:47.816868 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.821669 kubelet[2864]: E0302 14:29:47.821644 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.824978 kubelet[2864]: W0302 14:29:47.821861 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.824978 kubelet[2864]: E0302 14:29:47.821885 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.827482 kubelet[2864]: E0302 14:29:47.827463 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.827579 kubelet[2864]: W0302 14:29:47.827562 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.827658 kubelet[2864]: E0302 14:29:47.827642 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.828596 kubelet[2864]: E0302 14:29:47.828450 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.828596 kubelet[2864]: W0302 14:29:47.828464 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.828596 kubelet[2864]: E0302 14:29:47.828476 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.829004 kubelet[2864]: E0302 14:29:47.828988 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.829518 kubelet[2864]: W0302 14:29:47.829218 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.829518 kubelet[2864]: E0302 14:29:47.829238 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.829845 kubelet[2864]: E0302 14:29:47.829830 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.829926 kubelet[2864]: W0302 14:29:47.829911 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.829995 kubelet[2864]: E0302 14:29:47.829981 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.830506 kubelet[2864]: E0302 14:29:47.830492 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.830586 kubelet[2864]: W0302 14:29:47.830572 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.831806 kubelet[2864]: E0302 14:29:47.830641 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.842836 kubelet[2864]: E0302 14:29:47.842803 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.842992 kubelet[2864]: W0302 14:29:47.842973 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.843381 kubelet[2864]: E0302 14:29:47.843360 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.851221 kubelet[2864]: E0302 14:29:47.851198 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.851362 kubelet[2864]: W0302 14:29:47.851345 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.851442 kubelet[2864]: E0302 14:29:47.851426 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:47.859411 kubelet[2864]: E0302 14:29:47.859308 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.859411 kubelet[2864]: W0302 14:29:47.859407 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.862231 kubelet[2864]: E0302 14:29:47.859434 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:47.862392 kubelet[2864]: E0302 14:29:47.862291 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:47.862454 kubelet[2864]: W0302 14:29:47.862392 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:47.862454 kubelet[2864]: E0302 14:29:47.862412 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.530470 kubelet[2864]: E0302 14:29:48.530434 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:48.623630 kubelet[2864]: E0302 14:29:48.623589 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.623630 kubelet[2864]: W0302 14:29:48.624294 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.625400 kubelet[2864]: E0302 14:29:48.625287 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.629106 kubelet[2864]: E0302 14:29:48.627503 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.629106 kubelet[2864]: W0302 14:29:48.627516 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.629106 kubelet[2864]: E0302 14:29:48.627530 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.646674 kubelet[2864]: E0302 14:29:48.640369 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.646674 kubelet[2864]: W0302 14:29:48.640456 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.646674 kubelet[2864]: E0302 14:29:48.640479 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.651265 kubelet[2864]: E0302 14:29:48.649388 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.651265 kubelet[2864]: W0302 14:29:48.649403 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.651265 kubelet[2864]: E0302 14:29:48.649421 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.663484 kubelet[2864]: I0302 14:29:48.656852 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-56ff5c6d67-z8kb2" podStartSLOduration=6.466423961 podStartE2EDuration="18.656807734s" podCreationTimestamp="2026-03-02 14:29:30 +0000 UTC" firstStartedPulling="2026-03-02 14:29:33.447582889 +0000 UTC m=+100.424738026" lastFinishedPulling="2026-03-02 14:29:45.637966672 +0000 UTC m=+112.615121799" observedRunningTime="2026-03-02 14:29:47.723558453 +0000 UTC m=+114.700713600" watchObservedRunningTime="2026-03-02 14:29:48.656807734 +0000 UTC m=+115.633962871" Mar 2 14:29:48.670677 kubelet[2864]: E0302 14:29:48.669425 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.670677 kubelet[2864]: W0302 14:29:48.669523 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.670677 kubelet[2864]: E0302 14:29:48.669555 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.676852 kubelet[2864]: E0302 14:29:48.673474 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.676852 kubelet[2864]: W0302 14:29:48.673569 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.676852 kubelet[2864]: E0302 14:29:48.673592 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.676852 kubelet[2864]: E0302 14:29:48.673953 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.676852 kubelet[2864]: W0302 14:29:48.673964 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.676852 kubelet[2864]: E0302 14:29:48.673982 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.676852 kubelet[2864]: E0302 14:29:48.674401 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.676852 kubelet[2864]: W0302 14:29:48.674413 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.676852 kubelet[2864]: E0302 14:29:48.674429 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.676852 kubelet[2864]: E0302 14:29:48.674687 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.677586 kubelet[2864]: W0302 14:29:48.674698 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.677586 kubelet[2864]: E0302 14:29:48.674803 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.677586 kubelet[2864]: E0302 14:29:48.675245 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.677586 kubelet[2864]: W0302 14:29:48.675256 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.677586 kubelet[2864]: E0302 14:29:48.675268 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.677586 kubelet[2864]: E0302 14:29:48.675495 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.677586 kubelet[2864]: W0302 14:29:48.675505 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.677586 kubelet[2864]: E0302 14:29:48.675515 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.679784 kubelet[2864]: E0302 14:29:48.678908 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.679784 kubelet[2864]: W0302 14:29:48.678922 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.679784 kubelet[2864]: E0302 14:29:48.678934 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.701955 kubelet[2864]: E0302 14:29:48.683808 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.701955 kubelet[2864]: W0302 14:29:48.683890 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.701955 kubelet[2864]: E0302 14:29:48.683906 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.705806 kubelet[2864]: E0302 14:29:48.705387 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.705806 kubelet[2864]: W0302 14:29:48.705413 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.705806 kubelet[2864]: E0302 14:29:48.705436 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.720505 kubelet[2864]: E0302 14:29:48.711366 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.720505 kubelet[2864]: W0302 14:29:48.711383 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.720505 kubelet[2864]: E0302 14:29:48.711402 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.720505 kubelet[2864]: E0302 14:29:48.711922 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.720505 kubelet[2864]: W0302 14:29:48.711934 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.720505 kubelet[2864]: E0302 14:29:48.711949 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.729352 kubelet[2864]: E0302 14:29:48.725968 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.729352 kubelet[2864]: W0302 14:29:48.725996 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.729352 kubelet[2864]: E0302 14:29:48.726189 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.738513 kubelet[2864]: E0302 14:29:48.738242 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.738513 kubelet[2864]: W0302 14:29:48.738346 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.738513 kubelet[2864]: E0302 14:29:48.738375 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.739218 kubelet[2864]: E0302 14:29:48.739206 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.739218 kubelet[2864]: W0302 14:29:48.739217 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.739298 kubelet[2864]: E0302 14:29:48.739230 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.749393 kubelet[2864]: E0302 14:29:48.748388 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.749393 kubelet[2864]: W0302 14:29:48.748417 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.749393 kubelet[2864]: E0302 14:29:48.748443 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.751208 kubelet[2864]: E0302 14:29:48.750992 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.751263 kubelet[2864]: W0302 14:29:48.751240 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.751305 kubelet[2864]: E0302 14:29:48.751260 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.757534 kubelet[2864]: E0302 14:29:48.756953 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.757534 kubelet[2864]: W0302 14:29:48.756973 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.757534 kubelet[2864]: E0302 14:29:48.756989 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.757534 kubelet[2864]: E0302 14:29:48.757461 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.757534 kubelet[2864]: W0302 14:29:48.757472 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.757534 kubelet[2864]: E0302 14:29:48.757489 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.762467 kubelet[2864]: E0302 14:29:48.761241 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.762467 kubelet[2864]: W0302 14:29:48.761317 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.762467 kubelet[2864]: E0302 14:29:48.761331 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.819145 kubelet[2864]: E0302 14:29:48.789460 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.819145 kubelet[2864]: W0302 14:29:48.789659 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.819145 kubelet[2864]: E0302 14:29:48.789689 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.819145 kubelet[2864]: E0302 14:29:48.809399 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.819145 kubelet[2864]: W0302 14:29:48.809415 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.819145 kubelet[2864]: E0302 14:29:48.809437 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.819145 kubelet[2864]: E0302 14:29:48.812974 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.819145 kubelet[2864]: W0302 14:29:48.812989 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.819145 kubelet[2864]: E0302 14:29:48.813144 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.819145 kubelet[2864]: E0302 14:29:48.815689 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.819808 containerd[1557]: time="2026-03-02T14:29:48.804237032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:29:48.819808 containerd[1557]: time="2026-03-02T14:29:48.812844199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4630152" Mar 2 14:29:48.831575 kubelet[2864]: W0302 14:29:48.815786 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.831575 kubelet[2864]: E0302 14:29:48.815804 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.831575 kubelet[2864]: E0302 14:29:48.819378 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.831575 kubelet[2864]: W0302 14:29:48.819391 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.831575 kubelet[2864]: E0302 14:29:48.819407 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.831575 kubelet[2864]: E0302 14:29:48.820001 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.831575 kubelet[2864]: W0302 14:29:48.820178 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.831575 kubelet[2864]: E0302 14:29:48.820196 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.831575 kubelet[2864]: E0302 14:29:48.825357 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.831575 kubelet[2864]: W0302 14:29:48.825371 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.842853 containerd[1557]: time="2026-03-02T14:29:48.824284629Z" level=info msg="ImageCreate event name:\"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:29:48.845226 kubelet[2864]: E0302 14:29:48.825387 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.845226 kubelet[2864]: E0302 14:29:48.827456 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.845226 kubelet[2864]: W0302 14:29:48.827467 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.845226 kubelet[2864]: E0302 14:29:48.827480 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:48.845226 kubelet[2864]: E0302 14:29:48.830398 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:48.845226 kubelet[2864]: W0302 14:29:48.830410 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:48.845226 kubelet[2864]: E0302 14:29:48.830422 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:48.853279 containerd[1557]: time="2026-03-02T14:29:48.846344108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:29:48.854951 containerd[1557]: time="2026-03-02T14:29:48.854835668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"6186157\" in 3.213356418s" Mar 2 14:29:48.854951 containerd[1557]: time="2026-03-02T14:29:48.854885972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\"" Mar 2 14:29:48.952836 containerd[1557]: time="2026-03-02T14:29:48.951848762Z" level=info msg="CreateContainer within sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 2 14:29:49.042686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784505349.mount: Deactivated successfully. Mar 2 14:29:49.050267 containerd[1557]: time="2026-03-02T14:29:49.048424960Z" level=info msg="Container 2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:29:49.109680 containerd[1557]: time="2026-03-02T14:29:49.106669827Z" level=info msg="CreateContainer within sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84\"" Mar 2 14:29:49.112196 containerd[1557]: time="2026-03-02T14:29:49.111450624Z" level=info msg="StartContainer for \"2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84\"" Mar 2 14:29:49.128312 containerd[1557]: time="2026-03-02T14:29:49.127415325Z" level=info msg="connecting to shim 2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84" address="unix:///run/containerd/s/a6de1ac5193d90ce90ee0318c822a27c6bf3f08b22f0a97c3330820281e211c6" protocol=ttrpc version=3 Mar 2 14:29:49.335508 systemd[1]: Started cri-containerd-2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84.scope - libcontainer container 2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84. 
Mar 2 14:29:49.450277 kubelet[2864]: E0302 14:29:49.448226 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:29:49.576001 kubelet[2864]: E0302 14:29:49.572859 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:49.633116 kubelet[2864]: E0302 14:29:49.631992 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.633116 kubelet[2864]: W0302 14:29:49.632198 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.633116 kubelet[2864]: E0302 14:29:49.632226 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.638293 kubelet[2864]: E0302 14:29:49.635161 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.638293 kubelet[2864]: W0302 14:29:49.635177 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.638293 kubelet[2864]: E0302 14:29:49.635197 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.659300 kubelet[2864]: E0302 14:29:49.658961 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.659300 kubelet[2864]: W0302 14:29:49.658989 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.662305 kubelet[2864]: E0302 14:29:49.661263 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.666919 kubelet[2864]: E0302 14:29:49.666809 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.666919 kubelet[2864]: W0302 14:29:49.666826 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.666919 kubelet[2864]: E0302 14:29:49.666843 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.669522 kubelet[2864]: E0302 14:29:49.669326 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.669522 kubelet[2864]: W0302 14:29:49.669342 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.669522 kubelet[2864]: E0302 14:29:49.669355 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.669636 kubelet[2864]: E0302 14:29:49.669564 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.669636 kubelet[2864]: W0302 14:29:49.669575 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.669636 kubelet[2864]: E0302 14:29:49.669586 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.676582 kubelet[2864]: E0302 14:29:49.676381 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.676582 kubelet[2864]: W0302 14:29:49.676400 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.676582 kubelet[2864]: E0302 14:29:49.676415 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.679807 kubelet[2864]: E0302 14:29:49.678558 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.679807 kubelet[2864]: W0302 14:29:49.678649 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.679807 kubelet[2864]: E0302 14:29:49.678664 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.681279 kubelet[2864]: E0302 14:29:49.681260 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.681365 kubelet[2864]: W0302 14:29:49.681347 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.681457 kubelet[2864]: E0302 14:29:49.681438 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.681874 kubelet[2864]: E0302 14:29:49.681858 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.681968 kubelet[2864]: W0302 14:29:49.681951 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.682277 kubelet[2864]: E0302 14:29:49.682214 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.692241 kubelet[2864]: E0302 14:29:49.692220 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.692347 kubelet[2864]: W0302 14:29:49.692328 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.698350 kubelet[2864]: E0302 14:29:49.692698 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.711443 kubelet[2864]: E0302 14:29:49.706411 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.711443 kubelet[2864]: W0302 14:29:49.706432 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.711443 kubelet[2864]: E0302 14:29:49.706452 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.711443 kubelet[2864]: E0302 14:29:49.707607 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.711443 kubelet[2864]: W0302 14:29:49.707620 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.711443 kubelet[2864]: E0302 14:29:49.707635 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.711443 kubelet[2864]: E0302 14:29:49.708000 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.711443 kubelet[2864]: W0302 14:29:49.708174 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.711443 kubelet[2864]: E0302 14:29:49.708191 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.711443 kubelet[2864]: E0302 14:29:49.709622 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.711894 kubelet[2864]: W0302 14:29:49.709637 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.711894 kubelet[2864]: E0302 14:29:49.709652 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.819281 kubelet[2864]: E0302 14:29:49.817923 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.819281 kubelet[2864]: W0302 14:29:49.817952 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.819281 kubelet[2864]: E0302 14:29:49.817976 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.819281 kubelet[2864]: E0302 14:29:49.819288 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.819537 kubelet[2864]: W0302 14:29:49.819300 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.819537 kubelet[2864]: E0302 14:29:49.819312 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.826970 kubelet[2864]: E0302 14:29:49.825806 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.826970 kubelet[2864]: W0302 14:29:49.825830 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.826970 kubelet[2864]: E0302 14:29:49.825848 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.828958 kubelet[2864]: E0302 14:29:49.828285 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.828958 kubelet[2864]: W0302 14:29:49.828381 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.831229 kubelet[2864]: E0302 14:29:49.828398 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.838279 containerd[1557]: time="2026-03-02T14:29:49.831397148Z" level=info msg="StartContainer for \"2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84\" returns successfully" Mar 2 14:29:49.850592 kubelet[2864]: E0302 14:29:49.850555 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.850592 kubelet[2864]: W0302 14:29:49.850583 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.850592 kubelet[2864]: E0302 14:29:49.850609 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.855339 kubelet[2864]: E0302 14:29:49.854658 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.855339 kubelet[2864]: W0302 14:29:49.854673 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.855339 kubelet[2864]: E0302 14:29:49.854694 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.861295 kubelet[2864]: E0302 14:29:49.858999 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.861295 kubelet[2864]: W0302 14:29:49.860200 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.861295 kubelet[2864]: E0302 14:29:49.860224 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.865162 kubelet[2864]: E0302 14:29:49.862469 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.865162 kubelet[2864]: W0302 14:29:49.862484 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.865162 kubelet[2864]: E0302 14:29:49.862502 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.871796 kubelet[2864]: E0302 14:29:49.871651 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.871796 kubelet[2864]: W0302 14:29:49.871671 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.871796 kubelet[2864]: E0302 14:29:49.871694 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.878291 kubelet[2864]: E0302 14:29:49.878271 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.878386 kubelet[2864]: W0302 14:29:49.878371 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.878463 kubelet[2864]: E0302 14:29:49.878446 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.892483 kubelet[2864]: E0302 14:29:49.892352 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.892483 kubelet[2864]: W0302 14:29:49.892379 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.892483 kubelet[2864]: E0302 14:29:49.892402 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.893245 kubelet[2864]: E0302 14:29:49.893229 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.893329 kubelet[2864]: W0302 14:29:49.893315 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.893407 kubelet[2864]: E0302 14:29:49.893393 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.893816 kubelet[2864]: E0302 14:29:49.893797 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.893898 kubelet[2864]: W0302 14:29:49.893884 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.893957 kubelet[2864]: E0302 14:29:49.893945 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.894507 kubelet[2864]: E0302 14:29:49.894410 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.894507 kubelet[2864]: W0302 14:29:49.894424 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.894507 kubelet[2864]: E0302 14:29:49.894436 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.909257 kubelet[2864]: E0302 14:29:49.908195 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.909257 kubelet[2864]: W0302 14:29:49.908217 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.909257 kubelet[2864]: E0302 14:29:49.908238 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 14:29:49.911367 kubelet[2864]: E0302 14:29:49.910928 2864 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 14:29:49.914494 kubelet[2864]: W0302 14:29:49.911381 2864 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 14:29:49.914494 kubelet[2864]: E0302 14:29:49.912185 2864 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 14:29:49.920532 systemd[1]: cri-containerd-2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84.scope: Deactivated successfully. Mar 2 14:29:49.969211 containerd[1557]: time="2026-03-02T14:29:49.963985054Z" level=info msg="received container exit event container_id:\"2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84\" id:\"2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84\" pid:3601 exited_at:{seconds:1772461789 nanos:957163828}" Mar 2 14:29:50.162982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f1d393859cf40f875151caad51e2624b2d43ca1a8da254e532491e9e6b56c84-rootfs.mount: Deactivated successfully. 
Mar 2 14:29:50.625004 kubelet[2864]: E0302 14:29:50.619312 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:29:50.642191 containerd[1557]: time="2026-03-02T14:29:50.636401485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\"" Mar 2 14:29:51.450231 kubelet[2864]: E0302 14:29:51.448898 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:29:53.458822 kubelet[2864]: E0302 14:29:53.457683 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:29:55.448609 kubelet[2864]: E0302 14:29:55.448469 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:29:57.448137 kubelet[2864]: E0302 14:29:57.447569 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:29:57.484449 kubelet[2864]: E0302 14:29:57.481806 2864 kubelet_node_status.go:386] 
"Node not becoming ready in time after startup" Mar 2 14:29:59.452522 kubelet[2864]: E0302 14:29:59.451938 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:00.812469 kubelet[2864]: E0302 14:30:00.812397 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:01.448475 kubelet[2864]: E0302 14:30:01.448415 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:03.452916 kubelet[2864]: E0302 14:30:03.451620 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:05.448167 kubelet[2864]: E0302 14:30:05.447924 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:05.824884 kubelet[2864]: E0302 14:30:05.824328 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:07.451381 kubelet[2864]: E0302 14:30:07.451274 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:09.454224 kubelet[2864]: E0302 14:30:09.453422 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:10.834220 kubelet[2864]: E0302 14:30:10.829232 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:11.453938 kubelet[2864]: E0302 14:30:11.452681 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:13.454712 kubelet[2864]: E0302 14:30:13.454579 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:15.453268 kubelet[2864]: E0302 14:30:15.453210 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:15.841590 kubelet[2864]: E0302 14:30:15.841356 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:17.448552 kubelet[2864]: E0302 14:30:17.448495 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:19.449550 kubelet[2864]: E0302 14:30:19.449492 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:20.854977 kubelet[2864]: E0302 14:30:20.854914 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:21.451249 kubelet[2864]: E0302 14:30:21.450988 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:23.452655 kubelet[2864]: E0302 14:30:23.452595 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:25.448285 kubelet[2864]: E0302 14:30:25.447338 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:25.476678 kubelet[2864]: E0302 14:30:25.475430 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:25.859433 kubelet[2864]: E0302 14:30:25.859217 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:27.451511 kubelet[2864]: E0302 14:30:27.447756 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:29.454940 kubelet[2864]: E0302 14:30:29.454603 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:30.876526 kubelet[2864]: E0302 14:30:30.876453 2864 kubelet.go:3130] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:31.452466 kubelet[2864]: E0302 14:30:31.450462 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:33.450709 kubelet[2864]: E0302 14:30:33.448740 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:33.453119 kubelet[2864]: E0302 14:30:33.452201 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:35.459711 kubelet[2864]: E0302 14:30:35.459489 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:35.902646 kubelet[2864]: E0302 14:30:35.883356 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:36.468191 kubelet[2864]: E0302 14:30:36.468154 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 
14:30:37.450928 kubelet[2864]: E0302 14:30:37.450546 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:39.447274 kubelet[2864]: E0302 14:30:39.447225 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:40.930351 kubelet[2864]: E0302 14:30:40.922391 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:41.463968 kubelet[2864]: E0302 14:30:41.463824 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:43.450746 kubelet[2864]: E0302 14:30:43.450580 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:45.449766 kubelet[2864]: E0302 14:30:45.449705 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:45.927007 kubelet[2864]: E0302 14:30:45.925782 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:47.449500 kubelet[2864]: E0302 14:30:47.447793 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:49.454402 kubelet[2864]: E0302 14:30:49.454256 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:50.878386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255933743.mount: Deactivated successfully. 
Mar 2 14:30:50.955261 kubelet[2864]: E0302 14:30:50.955102 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:30:51.111553 containerd[1557]: time="2026-03-02T14:30:51.111407923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:30:51.117651 containerd[1557]: time="2026-03-02T14:30:51.116551138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=159483365" Mar 2 14:30:51.123490 containerd[1557]: time="2026-03-02T14:30:51.121534533Z" level=info msg="ImageCreate event name:\"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:30:51.144433 containerd[1557]: time="2026-03-02T14:30:51.142574953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:30:51.144961 containerd[1557]: time="2026-03-02T14:30:51.144620911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"159483227\" in 1m0.508101796s" Mar 2 14:30:51.144961 containerd[1557]: time="2026-03-02T14:30:51.144754010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\"" Mar 2 14:30:51.188740 containerd[1557]: time="2026-03-02T14:30:51.184327382Z" level=info msg="CreateContainer within 
sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 14:30:51.369690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657705218.mount: Deactivated successfully. Mar 2 14:30:51.413264 containerd[1557]: time="2026-03-02T14:30:51.409592147Z" level=info msg="Container 84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:30:51.454174 kubelet[2864]: E0302 14:30:51.448600 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:51.704583 containerd[1557]: time="2026-03-02T14:30:51.702747706Z" level=info msg="CreateContainer within sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191\"" Mar 2 14:30:51.718930 containerd[1557]: time="2026-03-02T14:30:51.714379195Z" level=info msg="StartContainer for \"84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191\"" Mar 2 14:30:51.746438 containerd[1557]: time="2026-03-02T14:30:51.745183361Z" level=info msg="connecting to shim 84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191" address="unix:///run/containerd/s/a6de1ac5193d90ce90ee0318c822a27c6bf3f08b22f0a97c3330820281e211c6" protocol=ttrpc version=3 Mar 2 14:30:51.907420 systemd[1]: Started cri-containerd-84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191.scope - libcontainer container 84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191. 
Mar 2 14:30:52.547685 containerd[1557]: time="2026-03-02T14:30:52.547531789Z" level=info msg="StartContainer for \"84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191\" returns successfully" Mar 2 14:30:52.738316 systemd[1]: cri-containerd-84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191.scope: Deactivated successfully. Mar 2 14:30:52.756557 containerd[1557]: time="2026-03-02T14:30:52.756511290Z" level=info msg="received container exit event container_id:\"84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191\" id:\"84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191\" pid:3704 exited_at:{seconds:1772461852 nanos:755483114}" Mar 2 14:30:53.071653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84df9add57a3868131218b0beef5e716ffb3f2383d60a16f92a5b28edf80c191-rootfs.mount: Deactivated successfully. Mar 2 14:30:53.467185 kubelet[2864]: E0302 14:30:53.459354 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:54.240999 containerd[1557]: time="2026-03-02T14:30:54.240957452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 14:30:55.449234 kubelet[2864]: E0302 14:30:55.448758 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:55.964708 kubelet[2864]: E0302 14:30:55.964545 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" Mar 2 14:30:56.453599 kubelet[2864]: E0302 14:30:56.452426 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:30:57.458237 kubelet[2864]: E0302 14:30:57.449842 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:30:59.461280 kubelet[2864]: E0302 14:30:59.448394 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:00.980195 kubelet[2864]: E0302 14:31:00.977275 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:31:01.451322 kubelet[2864]: E0302 14:31:01.447463 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:03.451556 kubelet[2864]: E0302 14:31:03.451493 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" 
podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:05.450344 kubelet[2864]: E0302 14:31:05.448854 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:05.985381 kubelet[2864]: E0302 14:31:05.984813 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:31:07.449388 kubelet[2864]: E0302 14:31:07.449332 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:09.453728 kubelet[2864]: E0302 14:31:09.453527 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:10.995695 kubelet[2864]: E0302 14:31:10.993375 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:31:11.452505 kubelet[2864]: E0302 14:31:11.449790 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:11.452505 kubelet[2864]: E0302 14:31:11.450610 2864 
pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:13.449666 kubelet[2864]: E0302 14:31:13.449613 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:15.451230 kubelet[2864]: E0302 14:31:15.449826 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:16.008489 kubelet[2864]: E0302 14:31:16.000810 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:31:17.449727 kubelet[2864]: E0302 14:31:17.447423 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:19.447462 kubelet[2864]: E0302 14:31:19.447405 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:21.018181 kubelet[2864]: E0302 14:31:21.015406 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:31:21.471409 kubelet[2864]: E0302 14:31:21.467346 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:22.121655 containerd[1557]: time="2026-03-02T14:31:22.121599344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:22.131767 containerd[1557]: time="2026-03-02T14:31:22.131716818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=70584418" Mar 2 14:31:22.140274 containerd[1557]: time="2026-03-02T14:31:22.140225361Z" level=info msg="ImageCreate event name:\"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:22.170251 containerd[1557]: time="2026-03-02T14:31:22.166841186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:22.178995 containerd[1557]: time="2026-03-02T14:31:22.172667043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"72140463\" in 27.930903137s" Mar 2 14:31:22.178995 containerd[1557]: time="2026-03-02T14:31:22.172711707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\"" Mar 2 14:31:22.284172 containerd[1557]: time="2026-03-02T14:31:22.281474023Z" level=info msg="CreateContainer within sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 14:31:22.593877 containerd[1557]: time="2026-03-02T14:31:22.593516139Z" level=info msg="Container 4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:31:22.717205 containerd[1557]: time="2026-03-02T14:31:22.709850408Z" level=info msg="CreateContainer within sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc\"" Mar 2 14:31:22.724213 containerd[1557]: time="2026-03-02T14:31:22.719302392Z" level=info msg="StartContainer for \"4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc\"" Mar 2 14:31:22.727436 containerd[1557]: time="2026-03-02T14:31:22.727364711Z" level=info msg="connecting to shim 4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc" address="unix:///run/containerd/s/a6de1ac5193d90ce90ee0318c822a27c6bf3f08b22f0a97c3330820281e211c6" protocol=ttrpc version=3 Mar 2 14:31:23.067506 systemd[1]: Started cri-containerd-4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc.scope - libcontainer container 4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc. 
Mar 2 14:31:23.467243 kubelet[2864]: E0302 14:31:23.462471 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:23.817407 containerd[1557]: time="2026-03-02T14:31:23.814343486Z" level=info msg="StartContainer for \"4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc\" returns successfully" Mar 2 14:31:25.448527 kubelet[2864]: E0302 14:31:25.447860 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:26.028592 kubelet[2864]: E0302 14:31:26.028439 2864 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 14:31:26.817567 systemd[1]: cri-containerd-4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc.scope: Deactivated successfully. Mar 2 14:31:26.818120 systemd[1]: cri-containerd-4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc.scope: Consumed 1.460s CPU time, 185.8M memory peak, 5.5M read from disk, 176.9M written to disk. 
Mar 2 14:31:26.828837 containerd[1557]: time="2026-03-02T14:31:26.828582246Z" level=info msg="received container exit event container_id:\"4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc\" id:\"4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc\" pid:3768 exited_at:{seconds:1772461886 nanos:820738836}" Mar 2 14:31:26.964179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f69b1f4cf8c11dee9f99ffa16b15f2fbcfcd64e32af31066b883144884459dc-rootfs.mount: Deactivated successfully. Mar 2 14:31:27.419090 containerd[1557]: time="2026-03-02T14:31:27.418900406Z" level=info msg="CreateContainer within sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 14:31:27.449369 kubelet[2864]: E0302 14:31:27.449316 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:27.524689 containerd[1557]: time="2026-03-02T14:31:27.523455612Z" level=info msg="Container 6c067dfa041c9b547755c3b51a22e18c55b52099b2233d66eb81279c7b32e3f3: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:31:27.525845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144121709.mount: Deactivated successfully. 
Mar 2 14:31:27.609815 containerd[1557]: time="2026-03-02T14:31:27.607510458Z" level=info msg="CreateContainer within sandbox \"b5ee9445a9d2f03fb0a49fc4a5c2157e3ab2237ef3f7e08cd1fc75a4d6bd18b1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6c067dfa041c9b547755c3b51a22e18c55b52099b2233d66eb81279c7b32e3f3\"" Mar 2 14:31:27.610340 containerd[1557]: time="2026-03-02T14:31:27.610167986Z" level=info msg="StartContainer for \"6c067dfa041c9b547755c3b51a22e18c55b52099b2233d66eb81279c7b32e3f3\"" Mar 2 14:31:27.622130 containerd[1557]: time="2026-03-02T14:31:27.621888482Z" level=info msg="connecting to shim 6c067dfa041c9b547755c3b51a22e18c55b52099b2233d66eb81279c7b32e3f3" address="unix:///run/containerd/s/a6de1ac5193d90ce90ee0318c822a27c6bf3f08b22f0a97c3330820281e211c6" protocol=ttrpc version=3 Mar 2 14:31:27.721997 systemd[1]: Started cri-containerd-6c067dfa041c9b547755c3b51a22e18c55b52099b2233d66eb81279c7b32e3f3.scope - libcontainer container 6c067dfa041c9b547755c3b51a22e18c55b52099b2233d66eb81279c7b32e3f3. 
Mar 2 14:31:28.098421 containerd[1557]: time="2026-03-02T14:31:28.098121265Z" level=info msg="StartContainer for \"6c067dfa041c9b547755c3b51a22e18c55b52099b2233d66eb81279c7b32e3f3\" returns successfully" Mar 2 14:31:28.439814 kubelet[2864]: I0302 14:31:28.438725 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-22psh" podStartSLOduration=3.616269992 podStartE2EDuration="1m57.438707233s" podCreationTimestamp="2026-03-02 14:29:31 +0000 UTC" firstStartedPulling="2026-03-02 14:29:33.496789908 +0000 UTC m=+100.473945035" lastFinishedPulling="2026-03-02 14:31:27.319227138 +0000 UTC m=+214.296382276" observedRunningTime="2026-03-02 14:31:28.432647859 +0000 UTC m=+215.409802986" watchObservedRunningTime="2026-03-02 14:31:28.438707233 +0000 UTC m=+215.415862349" Mar 2 14:31:28.494179 kubelet[2864]: E0302 14:31:28.484818 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:29.451571 kubelet[2864]: E0302 14:31:29.451344 2864 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5824n" podUID="61e0daf6-539e-4fdf-8264-3ce0c18def88" Mar 2 14:31:31.506959 systemd[1]: Created slice kubepods-besteffort-pod61e0daf6_539e_4fdf_8264_3ce0c18def88.slice - libcontainer container kubepods-besteffort-pod61e0daf6_539e_4fdf_8264_3ce0c18def88.slice. 
Mar 2 14:31:31.543169 containerd[1557]: time="2026-03-02T14:31:31.538233011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5824n,Uid:61e0daf6-539e-4fdf-8264-3ce0c18def88,Namespace:calico-system,Attempt:0,}" Mar 2 14:31:32.719441 systemd-networkd[1469]: cali2a22c1bca50: Link UP Mar 2 14:31:32.729224 systemd-networkd[1469]: cali2a22c1bca50: Gained carrier Mar 2 14:31:32.881563 containerd[1557]: 2026-03-02 14:31:31.928 [ERROR][3900] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 14:31:32.881563 containerd[1557]: 2026-03-02 14:31:32.082 [INFO][3900] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5824n-eth0 csi-node-driver- calico-system 61e0daf6-539e-4fdf-8264-3ce0c18def88 896 0 2026-03-02 14:29:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5d8f55657d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5824n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2a22c1bca50 [] [] }} ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Namespace="calico-system" Pod="csi-node-driver-5824n" WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-" Mar 2 14:31:32.881563 containerd[1557]: 2026-03-02 14:31:32.082 [INFO][3900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Namespace="calico-system" Pod="csi-node-driver-5824n" WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-eth0" Mar 2 14:31:32.881563 containerd[1557]: 2026-03-02 
14:31:32.348 [INFO][3950] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" HandleID="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Workload="localhost-k8s-csi--node--driver--5824n-eth0" Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.373 [INFO][3950] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" HandleID="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Workload="localhost-k8s-csi--node--driver--5824n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5824n", "timestamp":"2026-03-02 14:31:32.348219212 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000704420)} Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.374 [INFO][3950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.374 [INFO][3950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.374 [INFO][3950] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.398 [INFO][3950] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" host="localhost" Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.424 [INFO][3950] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.447 [INFO][3950] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.454 [INFO][3950] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.467 [INFO][3950] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:32.883813 containerd[1557]: 2026-03-02 14:31:32.467 [INFO][3950] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" host="localhost" Mar 2 14:31:32.884370 containerd[1557]: 2026-03-02 14:31:32.506 [INFO][3950] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3 Mar 2 14:31:32.884370 containerd[1557]: 2026-03-02 14:31:32.534 [INFO][3950] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" host="localhost" Mar 2 14:31:32.884370 containerd[1557]: 2026-03-02 14:31:32.576 [INFO][3950] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" host="localhost" Mar 2 14:31:32.884370 containerd[1557]: 2026-03-02 14:31:32.580 [INFO][3950] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" host="localhost" Mar 2 14:31:32.884370 containerd[1557]: 2026-03-02 14:31:32.580 [INFO][3950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 14:31:32.884370 containerd[1557]: 2026-03-02 14:31:32.581 [INFO][3950] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" HandleID="k8s-pod-network.0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Workload="localhost-k8s-csi--node--driver--5824n-eth0" Mar 2 14:31:32.884554 containerd[1557]: 2026-03-02 14:31:32.607 [INFO][3900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Namespace="calico-system" Pod="csi-node-driver-5824n" WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5824n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61e0daf6-539e-4fdf-8264-3ce0c18def88", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5824n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a22c1bca50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:32.884687 containerd[1557]: 2026-03-02 14:31:32.608 [INFO][3900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Namespace="calico-system" Pod="csi-node-driver-5824n" WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-eth0" Mar 2 14:31:32.884687 containerd[1557]: 2026-03-02 14:31:32.608 [INFO][3900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a22c1bca50 ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Namespace="calico-system" Pod="csi-node-driver-5824n" WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-eth0" Mar 2 14:31:32.884687 containerd[1557]: 2026-03-02 14:31:32.737 [INFO][3900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Namespace="calico-system" Pod="csi-node-driver-5824n" WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-eth0" Mar 2 14:31:32.884785 containerd[1557]: 2026-03-02 14:31:32.749 [INFO][3900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" 
Namespace="calico-system" Pod="csi-node-driver-5824n" WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5824n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61e0daf6-539e-4fdf-8264-3ce0c18def88", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3", Pod:"csi-node-driver-5824n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a22c1bca50", MAC:"82:d0:e2:8d:24:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:32.884906 containerd[1557]: 2026-03-02 14:31:32.843 [INFO][3900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" Namespace="calico-system" Pod="csi-node-driver-5824n" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5824n-eth0" Mar 2 14:31:33.720426 containerd[1557]: time="2026-03-02T14:31:33.720364699Z" level=info msg="connecting to shim 0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3" address="unix:///run/containerd/s/6b2f9c0cdef4ca79a7a3e5ce3f9e3cf1872b6258f11dc03aee51ace9fa480583" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:33.998744 systemd[1]: Started cri-containerd-0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3.scope - libcontainer container 0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3. Mar 2 14:31:34.152623 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:34.391505 containerd[1557]: time="2026-03-02T14:31:34.383230700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5824n,Uid:61e0daf6-539e-4fdf-8264-3ce0c18def88,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3\"" Mar 2 14:31:34.409140 containerd[1557]: time="2026-03-02T14:31:34.402534392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 14:31:34.709796 systemd-networkd[1469]: cali2a22c1bca50: Gained IPv6LL Mar 2 14:31:37.491463 kubelet[2864]: I0302 14:31:37.478857 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872b5038-60d8-43af-97b0-1e2a061524de-config\") pod \"goldmane-7d7658d587-nd82q\" (UID: \"872b5038-60d8-43af-97b0-1e2a061524de\") " pod="calico-system/goldmane-7d7658d587-nd82q" Mar 2 14:31:37.491463 kubelet[2864]: I0302 14:31:37.479106 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872b5038-60d8-43af-97b0-1e2a061524de-goldmane-ca-bundle\") pod \"goldmane-7d7658d587-nd82q\" (UID: 
\"872b5038-60d8-43af-97b0-1e2a061524de\") " pod="calico-system/goldmane-7d7658d587-nd82q" Mar 2 14:31:37.491463 kubelet[2864]: I0302 14:31:37.479150 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6l2\" (UniqueName: \"kubernetes.io/projected/365fbaf9-09d9-4956-996d-d2f0b2639b36-kube-api-access-tv6l2\") pod \"coredns-7d764666f9-c7ch9\" (UID: \"365fbaf9-09d9-4956-996d-d2f0b2639b36\") " pod="kube-system/coredns-7d764666f9-c7ch9" Mar 2 14:31:37.491463 kubelet[2864]: I0302 14:31:37.479179 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365fbaf9-09d9-4956-996d-d2f0b2639b36-config-volume\") pod \"coredns-7d764666f9-c7ch9\" (UID: \"365fbaf9-09d9-4956-996d-d2f0b2639b36\") " pod="kube-system/coredns-7d764666f9-c7ch9" Mar 2 14:31:37.491463 kubelet[2864]: I0302 14:31:37.479202 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/872b5038-60d8-43af-97b0-1e2a061524de-goldmane-key-pair\") pod \"goldmane-7d7658d587-nd82q\" (UID: \"872b5038-60d8-43af-97b0-1e2a061524de\") " pod="calico-system/goldmane-7d7658d587-nd82q" Mar 2 14:31:37.497506 kubelet[2864]: I0302 14:31:37.479227 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87l77\" (UniqueName: \"kubernetes.io/projected/872b5038-60d8-43af-97b0-1e2a061524de-kube-api-access-87l77\") pod \"goldmane-7d7658d587-nd82q\" (UID: \"872b5038-60d8-43af-97b0-1e2a061524de\") " pod="calico-system/goldmane-7d7658d587-nd82q" Mar 2 14:31:37.656510 systemd[1]: Created slice kubepods-burstable-pod365fbaf9_09d9_4956_996d_d2f0b2639b36.slice - libcontainer container kubepods-burstable-pod365fbaf9_09d9_4956_996d_d2f0b2639b36.slice. 
Mar 2 14:31:37.701151 kubelet[2864]: I0302 14:31:37.699797 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ndr\" (UniqueName: \"kubernetes.io/projected/31ca88c7-1dfe-4b28-9567-7a017c447e6b-kube-api-access-x6ndr\") pod \"coredns-7d764666f9-2mmb9\" (UID: \"31ca88c7-1dfe-4b28-9567-7a017c447e6b\") " pod="kube-system/coredns-7d764666f9-2mmb9" Mar 2 14:31:37.701151 kubelet[2864]: I0302 14:31:37.699889 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12150212-8273-40bd-997e-9c7946178ab2-whisker-ca-bundle\") pod \"whisker-79f4755f69-hg7tg\" (UID: \"12150212-8273-40bd-997e-9c7946178ab2\") " pod="calico-system/whisker-79f4755f69-hg7tg" Mar 2 14:31:37.701151 kubelet[2864]: I0302 14:31:37.699917 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37b4d339-e594-441d-9fd4-8c635ff58006-calico-apiserver-certs\") pod \"calico-apiserver-5889764684-6prtp\" (UID: \"37b4d339-e594-441d-9fd4-8c635ff58006\") " pod="calico-system/calico-apiserver-5889764684-6prtp" Mar 2 14:31:37.701151 kubelet[2864]: I0302 14:31:37.700000 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfgz\" (UniqueName: \"kubernetes.io/projected/37b4d339-e594-441d-9fd4-8c635ff58006-kube-api-access-psfgz\") pod \"calico-apiserver-5889764684-6prtp\" (UID: \"37b4d339-e594-441d-9fd4-8c635ff58006\") " pod="calico-system/calico-apiserver-5889764684-6prtp" Mar 2 14:31:37.701151 kubelet[2864]: I0302 14:31:37.700118 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ca88c7-1dfe-4b28-9567-7a017c447e6b-config-volume\") pod \"coredns-7d764666f9-2mmb9\" (UID: 
\"31ca88c7-1dfe-4b28-9567-7a017c447e6b\") " pod="kube-system/coredns-7d764666f9-2mmb9" Mar 2 14:31:37.701441 kubelet[2864]: I0302 14:31:37.700160 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsm2g\" (UniqueName: \"kubernetes.io/projected/ebb83b34-2219-4519-8564-451e7c7ee41e-kube-api-access-bsm2g\") pod \"calico-apiserver-5889764684-7jfq2\" (UID: \"ebb83b34-2219-4519-8564-451e7c7ee41e\") " pod="calico-system/calico-apiserver-5889764684-7jfq2" Mar 2 14:31:37.701441 kubelet[2864]: I0302 14:31:37.700180 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bee3324f-10c9-42b6-94be-e2d2eb0c25a7-tigera-ca-bundle\") pod \"calico-kube-controllers-84fbdcbd7-l8fwr\" (UID: \"bee3324f-10c9-42b6-94be-e2d2eb0c25a7\") " pod="calico-system/calico-kube-controllers-84fbdcbd7-l8fwr" Mar 2 14:31:37.701441 kubelet[2864]: I0302 14:31:37.700200 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgfm\" (UniqueName: \"kubernetes.io/projected/bee3324f-10c9-42b6-94be-e2d2eb0c25a7-kube-api-access-qbgfm\") pod \"calico-kube-controllers-84fbdcbd7-l8fwr\" (UID: \"bee3324f-10c9-42b6-94be-e2d2eb0c25a7\") " pod="calico-system/calico-kube-controllers-84fbdcbd7-l8fwr" Mar 2 14:31:37.701441 kubelet[2864]: I0302 14:31:37.700218 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/12150212-8273-40bd-997e-9c7946178ab2-whisker-backend-key-pair\") pod \"whisker-79f4755f69-hg7tg\" (UID: \"12150212-8273-40bd-997e-9c7946178ab2\") " pod="calico-system/whisker-79f4755f69-hg7tg" Mar 2 14:31:37.701441 kubelet[2864]: I0302 14:31:37.700242 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzcm9\" 
(UniqueName: \"kubernetes.io/projected/12150212-8273-40bd-997e-9c7946178ab2-kube-api-access-rzcm9\") pod \"whisker-79f4755f69-hg7tg\" (UID: \"12150212-8273-40bd-997e-9c7946178ab2\") " pod="calico-system/whisker-79f4755f69-hg7tg" Mar 2 14:31:37.701610 kubelet[2864]: I0302 14:31:37.700315 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebb83b34-2219-4519-8564-451e7c7ee41e-calico-apiserver-certs\") pod \"calico-apiserver-5889764684-7jfq2\" (UID: \"ebb83b34-2219-4519-8564-451e7c7ee41e\") " pod="calico-system/calico-apiserver-5889764684-7jfq2" Mar 2 14:31:37.701610 kubelet[2864]: I0302 14:31:37.700337 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/12150212-8273-40bd-997e-9c7946178ab2-nginx-config\") pod \"whisker-79f4755f69-hg7tg\" (UID: \"12150212-8273-40bd-997e-9c7946178ab2\") " pod="calico-system/whisker-79f4755f69-hg7tg" Mar 2 14:31:37.814465 systemd[1]: Created slice kubepods-besteffort-pod12150212_8273_40bd_997e_9c7946178ab2.slice - libcontainer container kubepods-besteffort-pod12150212_8273_40bd_997e_9c7946178ab2.slice. Mar 2 14:31:38.131295 kubelet[2864]: E0302 14:31:38.131195 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:38.146220 systemd[1]: Created slice kubepods-besteffort-pod37b4d339_e594_441d_9fd4_8c635ff58006.slice - libcontainer container kubepods-besteffort-pod37b4d339_e594_441d_9fd4_8c635ff58006.slice. 
Mar 2 14:31:38.162216 containerd[1557]: time="2026-03-02T14:31:38.152630493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c7ch9,Uid:365fbaf9-09d9-4956-996d-d2f0b2639b36,Namespace:kube-system,Attempt:0,}" Mar 2 14:31:38.274923 systemd[1]: Created slice kubepods-besteffort-podebb83b34_2219_4519_8564_451e7c7ee41e.slice - libcontainer container kubepods-besteffort-podebb83b34_2219_4519_8564_451e7c7ee41e.slice. Mar 2 14:31:38.306462 containerd[1557]: time="2026-03-02T14:31:38.306418545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889764684-6prtp,Uid:37b4d339-e594-441d-9fd4-8c635ff58006,Namespace:calico-system,Attempt:0,}" Mar 2 14:31:38.383333 systemd[1]: Created slice kubepods-burstable-pod31ca88c7_1dfe_4b28_9567_7a017c447e6b.slice - libcontainer container kubepods-burstable-pod31ca88c7_1dfe_4b28_9567_7a017c447e6b.slice. Mar 2 14:31:38.453605 containerd[1557]: time="2026-03-02T14:31:38.449820202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f4755f69-hg7tg,Uid:12150212-8273-40bd-997e-9c7946178ab2,Namespace:calico-system,Attempt:0,}" Mar 2 14:31:38.463237 containerd[1557]: time="2026-03-02T14:31:38.463160257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889764684-7jfq2,Uid:ebb83b34-2219-4519-8564-451e7c7ee41e,Namespace:calico-system,Attempt:0,}" Mar 2 14:31:38.464229 kubelet[2864]: E0302 14:31:38.463571 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:38.509372 containerd[1557]: time="2026-03-02T14:31:38.465410716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2mmb9,Uid:31ca88c7-1dfe-4b28-9567-7a017c447e6b,Namespace:kube-system,Attempt:0,}" Mar 2 14:31:38.520647 systemd[1]: Created slice kubepods-besteffort-pod872b5038_60d8_43af_97b0_1e2a061524de.slice - libcontainer container 
kubepods-besteffort-pod872b5038_60d8_43af_97b0_1e2a061524de.slice. Mar 2 14:31:38.596133 systemd[1]: Created slice kubepods-besteffort-podbee3324f_10c9_42b6_94be_e2d2eb0c25a7.slice - libcontainer container kubepods-besteffort-podbee3324f_10c9_42b6_94be_e2d2eb0c25a7.slice. Mar 2 14:31:38.655531 containerd[1557]: time="2026-03-02T14:31:38.653648986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-nd82q,Uid:872b5038-60d8-43af-97b0-1e2a061524de,Namespace:calico-system,Attempt:0,}" Mar 2 14:31:38.656773 containerd[1557]: time="2026-03-02T14:31:38.656522486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbdcbd7-l8fwr,Uid:bee3324f-10c9-42b6-94be-e2d2eb0c25a7,Namespace:calico-system,Attempt:0,}" Mar 2 14:31:38.846702 systemd-networkd[1469]: vxlan.calico: Link UP Mar 2 14:31:38.846718 systemd-networkd[1469]: vxlan.calico: Gained carrier Mar 2 14:31:40.243318 systemd-networkd[1469]: cali924689cf6dd: Link UP Mar 2 14:31:40.251550 systemd-networkd[1469]: cali924689cf6dd: Gained carrier Mar 2 14:31:40.344726 systemd-networkd[1469]: vxlan.calico: Gained IPv6LL Mar 2 14:31:40.481394 containerd[1557]: 2026-03-02 14:31:39.403 [INFO][4221] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0 calico-kube-controllers-84fbdcbd7- calico-system bee3324f-10c9-42b6-94be-e2d2eb0c25a7 1303 0 2026-03-02 14:29:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84fbdcbd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84fbdcbd7-l8fwr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali924689cf6dd [] [] }} 
ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-" Mar 2 14:31:40.481394 containerd[1557]: 2026-03-02 14:31:39.403 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" Mar 2 14:31:40.481394 containerd[1557]: 2026-03-02 14:31:39.718 [INFO][4292] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" HandleID="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Workload="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.740 [INFO][4292] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" HandleID="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Workload="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036e0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84fbdcbd7-l8fwr", "timestamp":"2026-03-02 14:31:39.718198717 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe000)} Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.740 [INFO][4292] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.740 [INFO][4292] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.740 [INFO][4292] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.763 [INFO][4292] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" host="localhost" Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.809 [INFO][4292] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.860 [INFO][4292] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.876 [INFO][4292] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:40.482885 containerd[1557]: 2026-03-02 14:31:39.914 [INFO][4292] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:40.483508 containerd[1557]: 2026-03-02 14:31:39.914 [INFO][4292] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" host="localhost" Mar 2 14:31:40.483508 containerd[1557]: 2026-03-02 14:31:39.936 [INFO][4292] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a Mar 2 14:31:40.483508 containerd[1557]: 2026-03-02 14:31:40.029 [INFO][4292] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" host="localhost" Mar 2 14:31:40.483508 containerd[1557]: 2026-03-02 14:31:40.074 [INFO][4292] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" host="localhost" Mar 2 14:31:40.483508 containerd[1557]: 2026-03-02 14:31:40.109 [INFO][4292] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" host="localhost" Mar 2 14:31:40.483508 containerd[1557]: 2026-03-02 14:31:40.112 [INFO][4292] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 14:31:40.483508 containerd[1557]: 2026-03-02 14:31:40.114 [INFO][4292] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" HandleID="k8s-pod-network.2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Workload="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" Mar 2 14:31:40.483714 containerd[1557]: 2026-03-02 14:31:40.179 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0", GenerateName:"calico-kube-controllers-84fbdcbd7-", Namespace:"calico-system", SelfLink:"", UID:"bee3324f-10c9-42b6-94be-e2d2eb0c25a7", ResourceVersion:"1303", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"84fbdcbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84fbdcbd7-l8fwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali924689cf6dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:40.484165 containerd[1557]: 2026-03-02 14:31:40.179 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" Mar 2 14:31:40.484165 containerd[1557]: 2026-03-02 14:31:40.179 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali924689cf6dd ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" Mar 2 14:31:40.484165 containerd[1557]: 2026-03-02 14:31:40.284 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" Mar 2 14:31:40.484261 containerd[1557]: 2026-03-02 14:31:40.329 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0", GenerateName:"calico-kube-controllers-84fbdcbd7-", Namespace:"calico-system", SelfLink:"", UID:"bee3324f-10c9-42b6-94be-e2d2eb0c25a7", ResourceVersion:"1303", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbdcbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a", Pod:"calico-kube-controllers-84fbdcbd7-l8fwr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali924689cf6dd", MAC:"52:e0:0d:b1:ff:fd", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:40.484399 containerd[1557]: 2026-03-02 14:31:40.428 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" Namespace="calico-system" Pod="calico-kube-controllers-84fbdcbd7-l8fwr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84fbdcbd7--l8fwr-eth0" Mar 2 14:31:40.494734 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:57128.service - OpenSSH per-connection server daemon (10.0.0.1:57128). Mar 2 14:31:41.007888 containerd[1557]: time="2026-03-02T14:31:40.973735261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8793087" Mar 2 14:31:41.007888 containerd[1557]: time="2026-03-02T14:31:40.976702156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:41.007888 containerd[1557]: time="2026-03-02T14:31:40.993713311Z" level=info msg="ImageCreate event name:\"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:41.007888 containerd[1557]: time="2026-03-02T14:31:40.995466381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"10349132\" in 6.592890251s" Mar 2 14:31:41.007888 containerd[1557]: time="2026-03-02T14:31:40.995507808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\"" Mar 2 14:31:41.007888 containerd[1557]: 
time="2026-03-02T14:31:40.996446820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:41.026230 systemd-networkd[1469]: calia292f387d06: Link UP Mar 2 14:31:41.040283 containerd[1557]: time="2026-03-02T14:31:41.031483712Z" level=info msg="CreateContainer within sandbox \"0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 14:31:41.046313 systemd-networkd[1469]: calia292f387d06: Gained carrier Mar 2 14:31:41.156287 containerd[1557]: time="2026-03-02T14:31:41.156234801Z" level=info msg="Container f4a58fad4fe2c1fa43673eb1feb22abca52852a260eeb66051013baae3de490b: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:31:41.272679 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 57128 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:31:41.273320 containerd[1557]: time="2026-03-02T14:31:41.271114932Z" level=info msg="connecting to shim 2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a" address="unix:///run/containerd/s/e14517458e77c73df27a2f809665422a2d8b961cf0d76c51efe9f6169a77c045" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:41.303378 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:31:41.368678 systemd-logind[1531]: New session 10 of user core. 
Mar 2 14:31:41.479586 containerd[1557]: 2026-03-02 14:31:39.554 [INFO][4163] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--c7ch9-eth0 coredns-7d764666f9- kube-system 365fbaf9-09d9-4956-996d-d2f0b2639b36 1293 0 2026-03-02 14:27:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-c7ch9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia292f387d06 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-" Mar 2 14:31:41.479586 containerd[1557]: 2026-03-02 14:31:39.554 [INFO][4163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" Mar 2 14:31:41.479586 containerd[1557]: 2026-03-02 14:31:40.179 [INFO][4303] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" HandleID="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Workload="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.238 [INFO][4303] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" HandleID="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" 
Workload="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000249d00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-c7ch9", "timestamp":"2026-03-02 14:31:40.179857095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001b1080)} Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.238 [INFO][4303] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.238 [INFO][4303] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.238 [INFO][4303] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.265 [INFO][4303] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" host="localhost" Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.350 [INFO][4303] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.567 [INFO][4303] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.597 [INFO][4303] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.619 [INFO][4303] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:41.479898 containerd[1557]: 2026-03-02 14:31:40.619 [INFO][4303] ipam/ipam.go 1245: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" host="localhost" Mar 2 14:31:41.480810 containerd[1557]: 2026-03-02 14:31:40.663 [INFO][4303] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694 Mar 2 14:31:41.480810 containerd[1557]: 2026-03-02 14:31:40.760 [INFO][4303] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" host="localhost" Mar 2 14:31:41.480810 containerd[1557]: 2026-03-02 14:31:40.816 [INFO][4303] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" host="localhost" Mar 2 14:31:41.480810 containerd[1557]: 2026-03-02 14:31:40.817 [INFO][4303] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" host="localhost" Mar 2 14:31:41.480810 containerd[1557]: 2026-03-02 14:31:40.817 [INFO][4303] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 14:31:41.480810 containerd[1557]: 2026-03-02 14:31:40.817 [INFO][4303] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" HandleID="k8s-pod-network.178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Workload="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" Mar 2 14:31:41.506445 containerd[1557]: 2026-03-02 14:31:40.912 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c7ch9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"365fbaf9-09d9-4956-996d-d2f0b2639b36", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-c7ch9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia292f387d06", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:41.506445 containerd[1557]: 2026-03-02 14:31:40.912 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" Mar 2 14:31:41.506445 containerd[1557]: 2026-03-02 14:31:40.912 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia292f387d06 ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" Mar 2 14:31:41.506445 containerd[1557]: 2026-03-02 14:31:41.226 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" Mar 2 14:31:41.506445 containerd[1557]: 2026-03-02 14:31:41.246 [INFO][4163] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c7ch9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"365fbaf9-09d9-4956-996d-d2f0b2639b36", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694", Pod:"coredns-7d764666f9-c7ch9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia292f387d06", MAC:"fa:d3:a4:0c:ef:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:41.506445 containerd[1557]: 2026-03-02 14:31:41.473 [INFO][4163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" Namespace="kube-system" Pod="coredns-7d764666f9-c7ch9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c7ch9-eth0" Mar 2 14:31:41.507601 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 14:31:41.532329 containerd[1557]: time="2026-03-02T14:31:41.530391503Z" level=info msg="CreateContainer within sandbox \"0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f4a58fad4fe2c1fa43673eb1feb22abca52852a260eeb66051013baae3de490b\"" Mar 2 14:31:41.555168 containerd[1557]: time="2026-03-02T14:31:41.554366834Z" level=info msg="StartContainer for \"f4a58fad4fe2c1fa43673eb1feb22abca52852a260eeb66051013baae3de490b\"" Mar 2 14:31:41.617312 containerd[1557]: time="2026-03-02T14:31:41.615466867Z" level=info msg="connecting to shim f4a58fad4fe2c1fa43673eb1feb22abca52852a260eeb66051013baae3de490b" address="unix:///run/containerd/s/6b2f9c0cdef4ca79a7a3e5ce3f9e3cf1872b6258f11dc03aee51ace9fa480583" protocol=ttrpc version=3 Mar 2 14:31:41.733774 systemd-networkd[1469]: cali20fe7f138e7: Link UP Mar 2 14:31:41.735751 systemd-networkd[1469]: cali20fe7f138e7: Gained carrier Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:39.784 [INFO][4186] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0 calico-apiserver-5889764684- calico-system ebb83b34-2219-4519-8564-451e7c7ee41e 1306 0 2026-03-02 14:29:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5889764684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5889764684-7jfq2 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali20fe7f138e7 [] [] }} ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:39.784 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:40.182 [INFO][4335] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" HandleID="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Workload="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:40.260 [INFO][4335] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" HandleID="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Workload="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139540), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5889764684-7jfq2", "timestamp":"2026-03-02 14:31:40.182582862 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002982c0)} Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:40.260 [INFO][4335] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:40.837 [INFO][4335] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:40.837 [INFO][4335] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:40.941 [INFO][4335] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.176 [INFO][4335] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.363 [INFO][4335] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.417 [INFO][4335] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.474 [INFO][4335] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.474 [INFO][4335] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" 
host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.516 [INFO][4335] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205 Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.559 [INFO][4335] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.666 [INFO][4335] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.666 [INFO][4335] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" host="localhost" Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.680 [INFO][4335] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 14:31:41.940571 containerd[1557]: 2026-03-02 14:31:41.680 [INFO][4335] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" HandleID="k8s-pod-network.9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Workload="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" Mar 2 14:31:41.948428 containerd[1557]: 2026-03-02 14:31:41.728 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0", GenerateName:"calico-apiserver-5889764684-", Namespace:"calico-system", SelfLink:"", UID:"ebb83b34-2219-4519-8564-451e7c7ee41e", ResourceVersion:"1306", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889764684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5889764684-7jfq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20fe7f138e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:41.948428 containerd[1557]: 2026-03-02 14:31:41.729 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" Mar 2 14:31:41.948428 containerd[1557]: 2026-03-02 14:31:41.729 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20fe7f138e7 ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" Mar 2 14:31:41.948428 containerd[1557]: 2026-03-02 14:31:41.735 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" Mar 2 14:31:41.948428 containerd[1557]: 2026-03-02 14:31:41.736 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0", GenerateName:"calico-apiserver-5889764684-", Namespace:"calico-system", 
SelfLink:"", UID:"ebb83b34-2219-4519-8564-451e7c7ee41e", ResourceVersion:"1306", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889764684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205", Pod:"calico-apiserver-5889764684-7jfq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali20fe7f138e7", MAC:"62:b7:db:6c:af:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:41.948428 containerd[1557]: 2026-03-02 14:31:41.862 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" Namespace="calico-system" Pod="calico-apiserver-5889764684-7jfq2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--7jfq2-eth0" Mar 2 14:31:42.068747 systemd[1]: Started cri-containerd-f4a58fad4fe2c1fa43673eb1feb22abca52852a260eeb66051013baae3de490b.scope - libcontainer container f4a58fad4fe2c1fa43673eb1feb22abca52852a260eeb66051013baae3de490b. 
Mar 2 14:31:42.220258 containerd[1557]: time="2026-03-02T14:31:42.219900633Z" level=info msg="connecting to shim 178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694" address="unix:///run/containerd/s/4c4658699a524258e8ccfc80c3aa0d21eb67484f5c4a76f6eb3e539d644eeae6" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:42.249689 systemd[1]: Started cri-containerd-2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a.scope - libcontainer container 2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a. Mar 2 14:31:42.266693 systemd-networkd[1469]: cali924689cf6dd: Gained IPv6LL Mar 2 14:31:42.269331 systemd-networkd[1469]: calia292f387d06: Gained IPv6LL Mar 2 14:31:42.612864 systemd-networkd[1469]: cali14b9bad6d70: Link UP Mar 2 14:31:42.613634 systemd-networkd[1469]: cali14b9bad6d70: Gained carrier Mar 2 14:31:42.659505 containerd[1557]: time="2026-03-02T14:31:42.659446628Z" level=info msg="connecting to shim 9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205" address="unix:///run/containerd/s/8bc99de587f0319ea4488454009bc4dc3d89f905bc84adf8b9bceb30363701d9" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:42.822569 systemd[1]: Started cri-containerd-178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694.scope - libcontainer container 178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694. 
Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:39.500 [INFO][4208] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--2mmb9-eth0 coredns-7d764666f9- kube-system 31ca88c7-1dfe-4b28-9567-7a017c447e6b 1308 0 2026-03-02 14:27:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-2mmb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali14b9bad6d70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:39.533 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:40.220 [INFO][4301] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" HandleID="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Workload="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:40.315 [INFO][4301] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" HandleID="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" 
Workload="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138c60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-2mmb9", "timestamp":"2026-03-02 14:31:40.22032869 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006cc160)} Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:40.315 [INFO][4301] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:41.667 [INFO][4301] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:41.667 [INFO][4301] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:41.772 [INFO][4301] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:41.961 [INFO][4301] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.067 [INFO][4301] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.119 [INFO][4301] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.149 [INFO][4301] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.149 [INFO][4301] ipam/ipam.go 1245: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.206 [INFO][4301] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.307 [INFO][4301] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.516 [INFO][4301] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.516 [INFO][4301] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" host="localhost" Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.516 [INFO][4301] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 14:31:42.916523 containerd[1557]: 2026-03-02 14:31:42.516 [INFO][4301] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" HandleID="k8s-pod-network.3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Workload="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" Mar 2 14:31:42.923576 sshd[4417]: Connection closed by 10.0.0.1 port 57128 Mar 2 14:31:42.918397 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Mar 2 14:31:42.924411 containerd[1557]: 2026-03-02 14:31:42.599 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--2mmb9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"31ca88c7-1dfe-4b28-9567-7a017c447e6b", ResourceVersion:"1308", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-2mmb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14b9bad6d70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:42.924411 containerd[1557]: 2026-03-02 14:31:42.601 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" Mar 2 14:31:42.924411 containerd[1557]: 2026-03-02 14:31:42.601 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14b9bad6d70 ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" Mar 2 14:31:42.924411 containerd[1557]: 2026-03-02 14:31:42.615 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" 
Mar 2 14:31:42.924411 containerd[1557]: 2026-03-02 14:31:42.631 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--2mmb9-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"31ca88c7-1dfe-4b28-9567-7a017c447e6b", ResourceVersion:"1308", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c", Pod:"coredns-7d764666f9-2mmb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali14b9bad6d70", MAC:"d6:ef:e6:50:ce:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:42.924411 containerd[1557]: 2026-03-02 14:31:42.820 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" Namespace="kube-system" Pod="coredns-7d764666f9-2mmb9" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--2mmb9-eth0" Mar 2 14:31:42.960562 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:57128.service: Deactivated successfully. Mar 2 14:31:42.972701 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 14:31:42.990215 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. Mar 2 14:31:42.996807 systemd-logind[1531]: Removed session 10. Mar 2 14:31:42.997515 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:43.041232 systemd[1]: Started cri-containerd-9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205.scope - libcontainer container 9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205. 
Mar 2 14:31:43.182987 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:43.353851 systemd-networkd[1469]: calib9f149ae244: Link UP Mar 2 14:31:43.367661 systemd-networkd[1469]: calib9f149ae244: Gained carrier Mar 2 14:31:43.480914 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:43.670751 containerd[1557]: time="2026-03-02T14:31:43.663309176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c7ch9,Uid:365fbaf9-09d9-4956-996d-d2f0b2639b36,Namespace:kube-system,Attempt:0,} returns sandbox id \"178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694\"" Mar 2 14:31:43.683147 systemd-networkd[1469]: cali20fe7f138e7: Gained IPv6LL Mar 2 14:31:43.743877 kubelet[2864]: E0302 14:31:43.743362 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:43.819323 containerd[1557]: time="2026-03-02T14:31:43.807637022Z" level=info msg="StartContainer for \"f4a58fad4fe2c1fa43673eb1feb22abca52852a260eeb66051013baae3de490b\" returns successfully" Mar 2 14:31:43.849319 containerd[1557]: time="2026-03-02T14:31:43.833551886Z" level=info msg="connecting to shim 3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c" address="unix:///run/containerd/s/ae53ff202a850e686b340c2b33ad64cfb2f83ac1dd3867eb4a9eab3a1033db51" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:43.849319 containerd[1557]: time="2026-03-02T14:31:43.841202448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 14:31:43.849319 containerd[1557]: time="2026-03-02T14:31:43.841806665Z" level=info msg="CreateContainer within sandbox \"178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 14:31:43.863258 systemd-networkd[1469]: cali14b9bad6d70: Gained IPv6LL Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:39.650 [INFO][4214] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7d7658d587--nd82q-eth0 goldmane-7d7658d587- calico-system 872b5038-60d8-43af-97b0-1e2a061524de 1304 0 2026-03-02 14:29:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7d7658d587 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7d7658d587-nd82q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib9f149ae244 [] [] }} ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:39.652 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:40.262 [INFO][4318] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" HandleID="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Workload="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:40.427 [INFO][4318] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" 
HandleID="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Workload="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003863f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7d7658d587-nd82q", "timestamp":"2026-03-02 14:31:40.26281806 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005d6000)} Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:40.427 [INFO][4318] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:42.517 [INFO][4318] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:42.540 [INFO][4318] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:42.618 [INFO][4318] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:42.801 [INFO][4318] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:42.939 [INFO][4318] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:42.959 [INFO][4318] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:42.991 [INFO][4318] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:43.985260 
containerd[1557]: 2026-03-02 14:31:42.991 [INFO][4318] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:43.020 [INFO][4318] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:43.062 [INFO][4318] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:43.164 [INFO][4318] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:43.164 [INFO][4318] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" host="localhost" Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:43.165 [INFO][4318] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 14:31:43.985260 containerd[1557]: 2026-03-02 14:31:43.165 [INFO][4318] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" HandleID="k8s-pod-network.2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Workload="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" Mar 2 14:31:44.038702 containerd[1557]: 2026-03-02 14:31:43.218 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--nd82q-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"872b5038-60d8-43af-97b0-1e2a061524de", ResourceVersion:"1304", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7d7658d587-nd82q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9f149ae244", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:44.038702 containerd[1557]: 2026-03-02 14:31:43.239 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" Mar 2 14:31:44.038702 containerd[1557]: 2026-03-02 14:31:43.239 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9f149ae244 ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" Mar 2 14:31:44.038702 containerd[1557]: 2026-03-02 14:31:43.377 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" Mar 2 14:31:44.038702 containerd[1557]: 2026-03-02 14:31:43.403 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--nd82q-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"872b5038-60d8-43af-97b0-1e2a061524de", ResourceVersion:"1304", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 27, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa", Pod:"goldmane-7d7658d587-nd82q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9f149ae244", MAC:"8a:90:87:1e:99:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:44.038702 containerd[1557]: 2026-03-02 14:31:43.619 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" Namespace="calico-system" Pod="goldmane-7d7658d587-nd82q" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--nd82q-eth0" Mar 2 14:31:44.048806 systemd-networkd[1469]: cali99a9aa58c0b: Link UP Mar 2 14:31:44.130639 systemd-networkd[1469]: cali99a9aa58c0b: Gained carrier Mar 2 14:31:44.260802 containerd[1557]: time="2026-03-02T14:31:44.260748044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbdcbd7-l8fwr,Uid:bee3324f-10c9-42b6-94be-e2d2eb0c25a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a\"" Mar 2 14:31:44.383516 containerd[1557]: time="2026-03-02T14:31:44.366187692Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5889764684-7jfq2,Uid:ebb83b34-2219-4519-8564-451e7c7ee41e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205\"" Mar 2 14:31:44.533246 containerd[1557]: time="2026-03-02T14:31:44.481722233Z" level=info msg="connecting to shim 2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa" address="unix:///run/containerd/s/11297980e46c84b0a25e8b6bdf270f9e0f8c0ac753ad0575f6bc8c02c15ee01f" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:39.483 [INFO][4165] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5889764684--6prtp-eth0 calico-apiserver-5889764684- calico-system 37b4d339-e594-441d-9fd4-8c635ff58006 1301 0 2026-03-02 14:29:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5889764684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5889764684-6prtp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali99a9aa58c0b [] [] }} ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:39.483 [INFO][4165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:40.355 [INFO][4309] ipam/ipam_plugin.go 235: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" HandleID="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Workload="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:40.418 [INFO][4309] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" HandleID="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Workload="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004db560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5889764684-6prtp", "timestamp":"2026-03-02 14:31:40.355223964 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001982c0)} Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:40.419 [INFO][4309] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.166 [INFO][4309] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.166 [INFO][4309] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.208 [INFO][4309] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.277 [INFO][4309] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.362 [INFO][4309] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.404 [INFO][4309] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.433 [INFO][4309] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.433 [INFO][4309] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.462 [INFO][4309] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.537 [INFO][4309] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.647 [INFO][4309] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.647 [INFO][4309] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" host="localhost" Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.647 [INFO][4309] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 14:31:44.533246 containerd[1557]: 2026-03-02 14:31:43.647 [INFO][4309] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" HandleID="k8s-pod-network.cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Workload="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" Mar 2 14:31:44.547841 containerd[1557]: 2026-03-02 14:31:43.875 [INFO][4165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5889764684--6prtp-eth0", GenerateName:"calico-apiserver-5889764684-", Namespace:"calico-system", SelfLink:"", UID:"37b4d339-e594-441d-9fd4-8c635ff58006", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889764684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5889764684-6prtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali99a9aa58c0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:44.547841 containerd[1557]: 2026-03-02 14:31:43.875 [INFO][4165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" Mar 2 14:31:44.547841 containerd[1557]: 2026-03-02 14:31:43.875 [INFO][4165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99a9aa58c0b ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" Mar 2 14:31:44.547841 containerd[1557]: 2026-03-02 14:31:44.231 [INFO][4165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" Mar 2 14:31:44.547841 containerd[1557]: 2026-03-02 14:31:44.241 [INFO][4165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5889764684--6prtp-eth0", GenerateName:"calico-apiserver-5889764684-", Namespace:"calico-system", SelfLink:"", UID:"37b4d339-e594-441d-9fd4-8c635ff58006", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889764684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c", Pod:"calico-apiserver-5889764684-6prtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali99a9aa58c0b", MAC:"de:bb:5e:91:59:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:44.547841 containerd[1557]: 2026-03-02 14:31:44.363 [INFO][4165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" 
Namespace="calico-system" Pod="calico-apiserver-5889764684-6prtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5889764684--6prtp-eth0" Mar 2 14:31:44.558167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650335829.mount: Deactivated successfully. Mar 2 14:31:44.633189 containerd[1557]: time="2026-03-02T14:31:44.620554376Z" level=info msg="Container 14b622d04f702b1d3a2fb185cb6bebfdb9c3039d79cf4bb3b7eca776a86a416e: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:31:44.628503 systemd-networkd[1469]: calib9f149ae244: Gained IPv6LL Mar 2 14:31:44.706738 systemd-networkd[1469]: cali3d6017a3829: Link UP Mar 2 14:31:44.714775 systemd-networkd[1469]: cali3d6017a3829: Gained carrier Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:40.096 [INFO][4188] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79f4755f69--hg7tg-eth0 whisker-79f4755f69- calico-system 12150212-8273-40bd-997e-9c7946178ab2 1300 0 2026-03-02 14:31:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79f4755f69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79f4755f69-hg7tg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3d6017a3829 [] [] }} ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:40.096 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:40.438 [INFO][4346] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" HandleID="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Workload="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:40.468 [INFO][4346] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" HandleID="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Workload="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011fbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79f4755f69-hg7tg", "timestamp":"2026-03-02 14:31:40.438800589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007c0420)} Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:40.468 [INFO][4346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:43.654 [INFO][4346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:43.655 [INFO][4346] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:43.710 [INFO][4346] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:43.858 [INFO][4346] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.015 [INFO][4346] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.109 [INFO][4346] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.231 [INFO][4346] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.231 [INFO][4346] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.280 [INFO][4346] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8 Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.344 [INFO][4346] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.571 [INFO][4346] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.571 [INFO][4346] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" host="localhost" Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.571 [INFO][4346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 14:31:44.872580 containerd[1557]: 2026-03-02 14:31:44.571 [INFO][4346] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" HandleID="k8s-pod-network.ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Workload="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" Mar 2 14:31:44.864177 systemd[1]: Started cri-containerd-3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c.scope - libcontainer container 3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c. 
Mar 2 14:31:44.874760 containerd[1557]: 2026-03-02 14:31:44.664 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79f4755f69--hg7tg-eth0", GenerateName:"whisker-79f4755f69-", Namespace:"calico-system", SelfLink:"", UID:"12150212-8273-40bd-997e-9c7946178ab2", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f4755f69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79f4755f69-hg7tg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d6017a3829", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:44.874760 containerd[1557]: 2026-03-02 14:31:44.666 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" 
WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" Mar 2 14:31:44.874760 containerd[1557]: 2026-03-02 14:31:44.667 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d6017a3829 ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" Mar 2 14:31:44.874760 containerd[1557]: 2026-03-02 14:31:44.730 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" Mar 2 14:31:44.874760 containerd[1557]: 2026-03-02 14:31:44.736 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79f4755f69--hg7tg-eth0", GenerateName:"whisker-79f4755f69-", Namespace:"calico-system", SelfLink:"", UID:"12150212-8273-40bd-997e-9c7946178ab2", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 14, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f4755f69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8", Pod:"whisker-79f4755f69-hg7tg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3d6017a3829", MAC:"9e:4d:13:b1:f3:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 14:31:44.874760 containerd[1557]: 2026-03-02 14:31:44.823 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" Namespace="calico-system" Pod="whisker-79f4755f69-hg7tg" WorkloadEndpoint="localhost-k8s-whisker--79f4755f69--hg7tg-eth0" Mar 2 14:31:44.910076 containerd[1557]: time="2026-03-02T14:31:44.908774501Z" level=info msg="CreateContainer within sandbox \"178830cf65b2d2db1943ae086c76895711628dce706471e1efe53c2b6b257694\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14b622d04f702b1d3a2fb185cb6bebfdb9c3039d79cf4bb3b7eca776a86a416e\"" Mar 2 14:31:44.944668 containerd[1557]: time="2026-03-02T14:31:44.940327704Z" level=info msg="StartContainer for \"14b622d04f702b1d3a2fb185cb6bebfdb9c3039d79cf4bb3b7eca776a86a416e\"" Mar 2 14:31:44.993668 containerd[1557]: time="2026-03-02T14:31:44.991810748Z" level=info msg="connecting to shim 14b622d04f702b1d3a2fb185cb6bebfdb9c3039d79cf4bb3b7eca776a86a416e" address="unix:///run/containerd/s/4c4658699a524258e8ccfc80c3aa0d21eb67484f5c4a76f6eb3e539d644eeae6" protocol=ttrpc version=3 Mar 2 14:31:45.023719 systemd[1]: Started cri-containerd-2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa.scope - libcontainer container 
2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa. Mar 2 14:31:45.098991 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:45.201247 containerd[1557]: time="2026-03-02T14:31:45.201200461Z" level=info msg="connecting to shim cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c" address="unix:///run/containerd/s/527e7dcc378396e2120a340566f7d46a2ba35bad839110edfc0a062ef8b8eab0" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:45.219175 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:45.311851 systemd[1]: Started cri-containerd-14b622d04f702b1d3a2fb185cb6bebfdb9c3039d79cf4bb3b7eca776a86a416e.scope - libcontainer container 14b622d04f702b1d3a2fb185cb6bebfdb9c3039d79cf4bb3b7eca776a86a416e. Mar 2 14:31:45.369923 containerd[1557]: time="2026-03-02T14:31:45.355879516Z" level=info msg="connecting to shim ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8" address="unix:///run/containerd/s/3e20a9036da90a4259a389a4cb112773f5eae7c7ee7ce35f6cfc8aafee2e23e4" namespace=k8s.io protocol=ttrpc version=3 Mar 2 14:31:45.527187 systemd[1]: Started cri-containerd-cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c.scope - libcontainer container cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c. 
Mar 2 14:31:45.603131 containerd[1557]: time="2026-03-02T14:31:45.600884881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2mmb9,Uid:31ca88c7-1dfe-4b28-9567-7a017c447e6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c\"" Mar 2 14:31:45.603263 kubelet[2864]: E0302 14:31:45.602175 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:45.727162 containerd[1557]: time="2026-03-02T14:31:45.714817117Z" level=info msg="CreateContainer within sandbox \"3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 14:31:45.780843 systemd-networkd[1469]: cali99a9aa58c0b: Gained IPv6LL Mar 2 14:31:45.785535 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:45.824992 containerd[1557]: time="2026-03-02T14:31:45.824404647Z" level=info msg="Container 5b03a03e4fe21b173444f145760cce5f8dd41294cc9066f8ea8bbe65424a531e: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:31:45.829392 containerd[1557]: time="2026-03-02T14:31:45.829354413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-nd82q,Uid:872b5038-60d8-43af-97b0-1e2a061524de,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa\"" Mar 2 14:31:45.980137 containerd[1557]: time="2026-03-02T14:31:45.980095036Z" level=info msg="CreateContainer within sandbox \"3938a832960d4362ea0ad4a5b305471343a236ed3c49eb39c0507bc63f4ecb1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b03a03e4fe21b173444f145760cce5f8dd41294cc9066f8ea8bbe65424a531e\"" Mar 2 14:31:46.000291 containerd[1557]: time="2026-03-02T14:31:45.992926266Z" level=info 
msg="StartContainer for \"5b03a03e4fe21b173444f145760cce5f8dd41294cc9066f8ea8bbe65424a531e\"" Mar 2 14:31:46.035652 systemd-networkd[1469]: cali3d6017a3829: Gained IPv6LL Mar 2 14:31:46.049106 containerd[1557]: time="2026-03-02T14:31:46.048918240Z" level=info msg="connecting to shim 5b03a03e4fe21b173444f145760cce5f8dd41294cc9066f8ea8bbe65424a531e" address="unix:///run/containerd/s/ae53ff202a850e686b340c2b33ad64cfb2f83ac1dd3867eb4a9eab3a1033db51" protocol=ttrpc version=3 Mar 2 14:31:46.057345 systemd[1]: Started cri-containerd-ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8.scope - libcontainer container ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8. Mar 2 14:31:46.223713 containerd[1557]: time="2026-03-02T14:31:46.222210876Z" level=info msg="StartContainer for \"14b622d04f702b1d3a2fb185cb6bebfdb9c3039d79cf4bb3b7eca776a86a416e\" returns successfully" Mar 2 14:31:46.226553 systemd[1]: Started cri-containerd-5b03a03e4fe21b173444f145760cce5f8dd41294cc9066f8ea8bbe65424a531e.scope - libcontainer container 5b03a03e4fe21b173444f145760cce5f8dd41294cc9066f8ea8bbe65424a531e. 
Mar 2 14:31:46.242326 containerd[1557]: time="2026-03-02T14:31:46.241597556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889764684-6prtp,Uid:37b4d339-e594-441d-9fd4-8c635ff58006,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c\"" Mar 2 14:31:46.286277 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 14:31:46.520610 containerd[1557]: time="2026-03-02T14:31:46.500326685Z" level=info msg="StartContainer for \"5b03a03e4fe21b173444f145760cce5f8dd41294cc9066f8ea8bbe65424a531e\" returns successfully" Mar 2 14:31:46.557613 containerd[1557]: time="2026-03-02T14:31:46.557566577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f4755f69-hg7tg,Uid:12150212-8273-40bd-997e-9c7946178ab2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8\"" Mar 2 14:31:46.810775 kubelet[2864]: E0302 14:31:46.808695 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:46.857507 kubelet[2864]: E0302 14:31:46.856530 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:47.065353 kubelet[2864]: I0302 14:31:47.059303 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-c7ch9" podStartSLOduration=233.059285668 podStartE2EDuration="3m53.059285668s" podCreationTimestamp="2026-03-02 14:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:31:46.923492142 +0000 UTC m=+233.900647269" 
watchObservedRunningTime="2026-03-02 14:31:47.059285668 +0000 UTC m=+234.036440805" Mar 2 14:31:47.867615 kubelet[2864]: E0302 14:31:47.866702 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:47.867615 kubelet[2864]: E0302 14:31:47.867598 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:47.980310 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:57142.service - OpenSSH per-connection server daemon (10.0.0.1:57142). Mar 2 14:31:48.018598 kubelet[2864]: I0302 14:31:48.016904 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-2mmb9" podStartSLOduration=233.016887736 podStartE2EDuration="3m53.016887736s" podCreationTimestamp="2026-03-02 14:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 14:31:47.05993012 +0000 UTC m=+234.037085237" watchObservedRunningTime="2026-03-02 14:31:48.016887736 +0000 UTC m=+234.994042863" Mar 2 14:31:48.234629 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 57142 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:31:48.239628 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:31:48.270130 systemd-logind[1531]: New session 11 of user core. Mar 2 14:31:48.298308 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 2 14:31:48.921407 kubelet[2864]: E0302 14:31:48.882421 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:48.922681 kubelet[2864]: E0302 14:31:48.885512 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:49.119828 sshd[4938]: Connection closed by 10.0.0.1 port 57142 Mar 2 14:31:49.120846 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Mar 2 14:31:49.134470 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:57142.service: Deactivated successfully. Mar 2 14:31:49.142668 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 14:31:49.152156 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. Mar 2 14:31:49.159426 systemd-logind[1531]: Removed session 11. Mar 2 14:31:49.717176 containerd[1557]: time="2026-03-02T14:31:49.716543047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:49.739521 containerd[1557]: time="2026-03-02T14:31:49.735853591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=14702266" Mar 2 14:31:49.749908 containerd[1557]: time="2026-03-02T14:31:49.749818552Z" level=info msg="ImageCreate event name:\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:49.773236 containerd[1557]: time="2026-03-02T14:31:49.769726753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:31:49.773236 
containerd[1557]: time="2026-03-02T14:31:49.770858625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"16258263\" in 5.929616724s" Mar 2 14:31:49.773236 containerd[1557]: time="2026-03-02T14:31:49.770898009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\"" Mar 2 14:31:49.798619 containerd[1557]: time="2026-03-02T14:31:49.796115706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 14:31:49.832488 containerd[1557]: time="2026-03-02T14:31:49.832437980Z" level=info msg="CreateContainer within sandbox \"0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 14:31:49.920157 kubelet[2864]: E0302 14:31:49.915906 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:49.922260 containerd[1557]: time="2026-03-02T14:31:49.921618604Z" level=info msg="Container 37aa630e81705d99b23d503062e261fcccd2c62b2c3c1f3cecd98b17f7e45844: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:31:49.924357 kubelet[2864]: E0302 14:31:49.923711 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:50.033494 containerd[1557]: time="2026-03-02T14:31:50.032817040Z" level=info msg="CreateContainer within sandbox 
\"0e5dc5723c1982a1ba100988deaf865b95af6b535f9683053da71431b84593a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"37aa630e81705d99b23d503062e261fcccd2c62b2c3c1f3cecd98b17f7e45844\"" Mar 2 14:31:50.040687 containerd[1557]: time="2026-03-02T14:31:50.039900692Z" level=info msg="StartContainer for \"37aa630e81705d99b23d503062e261fcccd2c62b2c3c1f3cecd98b17f7e45844\"" Mar 2 14:31:50.052306 containerd[1557]: time="2026-03-02T14:31:50.049717987Z" level=info msg="connecting to shim 37aa630e81705d99b23d503062e261fcccd2c62b2c3c1f3cecd98b17f7e45844" address="unix:///run/containerd/s/6b2f9c0cdef4ca79a7a3e5ce3f9e3cf1872b6258f11dc03aee51ace9fa480583" protocol=ttrpc version=3 Mar 2 14:31:50.207343 systemd[1]: Started cri-containerd-37aa630e81705d99b23d503062e261fcccd2c62b2c3c1f3cecd98b17f7e45844.scope - libcontainer container 37aa630e81705d99b23d503062e261fcccd2c62b2c3c1f3cecd98b17f7e45844. Mar 2 14:31:50.719553 containerd[1557]: time="2026-03-02T14:31:50.719483086Z" level=info msg="StartContainer for \"37aa630e81705d99b23d503062e261fcccd2c62b2c3c1f3cecd98b17f7e45844\" returns successfully" Mar 2 14:31:51.447814 kubelet[2864]: E0302 14:31:51.447647 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:51.705533 kubelet[2864]: I0302 14:31:51.701265 2864 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 14:31:51.705533 kubelet[2864]: I0302 14:31:51.701329 2864 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 14:31:52.449256 kubelet[2864]: E0302 14:31:52.448614 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:31:54.181606 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:44572.service - OpenSSH per-connection server daemon (10.0.0.1:44572). Mar 2 14:31:54.520713 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 44572 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:31:54.526585 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:31:54.576466 systemd-logind[1531]: New session 12 of user core. Mar 2 14:31:54.615884 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 14:31:55.451525 sshd[5030]: Connection closed by 10.0.0.1 port 44572 Mar 2 14:31:55.455439 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Mar 2 14:31:55.477454 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:44572.service: Deactivated successfully. Mar 2 14:31:55.497780 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 14:31:55.502739 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Mar 2 14:31:55.509444 systemd-logind[1531]: Removed session 12. Mar 2 14:31:59.933558 kubelet[2864]: I0302 14:31:59.933482 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-5824n" podStartSLOduration=133.540760948 podStartE2EDuration="2m28.933461676s" podCreationTimestamp="2026-03-02 14:29:31 +0000 UTC" firstStartedPulling="2026-03-02 14:31:34.402170362 +0000 UTC m=+221.379325489" lastFinishedPulling="2026-03-02 14:31:49.79487109 +0000 UTC m=+236.772026217" observedRunningTime="2026-03-02 14:31:51.073217954 +0000 UTC m=+238.050373090" watchObservedRunningTime="2026-03-02 14:31:59.933461676 +0000 UTC m=+246.910616803" Mar 2 14:32:00.496283 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:40296.service - OpenSSH per-connection server daemon (10.0.0.1:40296). 
Mar 2 14:32:00.692542 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 40296 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:00.704799 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:00.730377 systemd-logind[1531]: New session 13 of user core. Mar 2 14:32:00.750763 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 14:32:01.455215 sshd[5083]: Connection closed by 10.0.0.1 port 40296 Mar 2 14:32:01.461699 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:01.528788 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:40296.service: Deactivated successfully. Mar 2 14:32:01.551778 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 14:32:01.589557 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. Mar 2 14:32:01.599940 systemd-logind[1531]: Removed session 13. Mar 2 14:32:05.292523 containerd[1557]: time="2026-03-02T14:32:05.291756458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:05.295246 containerd[1557]: time="2026-03-02T14:32:05.295206574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=52396348" Mar 2 14:32:05.304887 containerd[1557]: time="2026-03-02T14:32:05.304795195Z" level=info msg="ImageCreate event name:\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:05.331432 containerd[1557]: time="2026-03-02T14:32:05.331378655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:05.337238 containerd[1557]: time="2026-03-02T14:32:05.333223178Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"53952361\" in 15.537059392s" Mar 2 14:32:05.337238 containerd[1557]: time="2026-03-02T14:32:05.333262260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\"" Mar 2 14:32:05.345926 containerd[1557]: time="2026-03-02T14:32:05.343606177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 14:32:05.466113 containerd[1557]: time="2026-03-02T14:32:05.465675720Z" level=info msg="CreateContainer within sandbox \"2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 14:32:05.564247 containerd[1557]: time="2026-03-02T14:32:05.562728292Z" level=info msg="Container 7b0e5edc45c8c8cc82b30ddfdfe0ab896757a0c959e5954b4b2ccce3d8dff5e7: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:32:05.574318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798597877.mount: Deactivated successfully. 
Mar 2 14:32:05.675947 containerd[1557]: time="2026-03-02T14:32:05.675808474Z" level=info msg="CreateContainer within sandbox \"2a0bfad2d401e823be40e7db8df9fae6fa9572b5ffa6f53beb6c64b75a1a4b7a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7b0e5edc45c8c8cc82b30ddfdfe0ab896757a0c959e5954b4b2ccce3d8dff5e7\"" Mar 2 14:32:05.687838 containerd[1557]: time="2026-03-02T14:32:05.683172304Z" level=info msg="StartContainer for \"7b0e5edc45c8c8cc82b30ddfdfe0ab896757a0c959e5954b4b2ccce3d8dff5e7\"" Mar 2 14:32:05.704167 containerd[1557]: time="2026-03-02T14:32:05.700765384Z" level=info msg="connecting to shim 7b0e5edc45c8c8cc82b30ddfdfe0ab896757a0c959e5954b4b2ccce3d8dff5e7" address="unix:///run/containerd/s/e14517458e77c73df27a2f809665422a2d8b961cf0d76c51efe9f6169a77c045" protocol=ttrpc version=3 Mar 2 14:32:05.819670 systemd[1]: Started cri-containerd-7b0e5edc45c8c8cc82b30ddfdfe0ab896757a0c959e5954b4b2ccce3d8dff5e7.scope - libcontainer container 7b0e5edc45c8c8cc82b30ddfdfe0ab896757a0c959e5954b4b2ccce3d8dff5e7. 
Mar 2 14:32:06.148783 containerd[1557]: time="2026-03-02T14:32:06.148739811Z" level=info msg="StartContainer for \"7b0e5edc45c8c8cc82b30ddfdfe0ab896757a0c959e5954b4b2ccce3d8dff5e7\" returns successfully" Mar 2 14:32:06.433966 kubelet[2864]: I0302 14:32:06.433266 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84fbdcbd7-l8fwr" podStartSLOduration=133.373034616 podStartE2EDuration="2m34.433246683s" podCreationTimestamp="2026-03-02 14:29:32 +0000 UTC" firstStartedPulling="2026-03-02 14:31:44.279373821 +0000 UTC m=+231.256528948" lastFinishedPulling="2026-03-02 14:32:05.339585897 +0000 UTC m=+252.316741015" observedRunningTime="2026-03-02 14:32:06.421671899 +0000 UTC m=+253.398827036" watchObservedRunningTime="2026-03-02 14:32:06.433246683 +0000 UTC m=+253.410401800" Mar 2 14:32:06.495556 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:40312.service - OpenSSH per-connection server daemon (10.0.0.1:40312). Mar 2 14:32:07.033122 sshd[5180]: Accepted publickey for core from 10.0.0.1 port 40312 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:07.060942 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:07.153807 systemd-logind[1531]: New session 14 of user core. Mar 2 14:32:07.204947 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 14:32:08.464520 sshd[5194]: Connection closed by 10.0.0.1 port 40312 Mar 2 14:32:08.469648 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:08.499358 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:40312.service: Deactivated successfully. Mar 2 14:32:08.512727 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 14:32:08.523237 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Mar 2 14:32:08.531713 systemd-logind[1531]: Removed session 14. 
Mar 2 14:32:13.580513 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:60876.service - OpenSSH per-connection server daemon (10.0.0.1:60876). Mar 2 14:32:13.981614 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 60876 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:14.002617 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:14.041664 systemd-logind[1531]: New session 15 of user core. Mar 2 14:32:14.116412 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 14:32:14.710229 sshd[5221]: Connection closed by 10.0.0.1 port 60876 Mar 2 14:32:14.716152 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:14.751740 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Mar 2 14:32:14.751969 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:60876.service: Deactivated successfully. Mar 2 14:32:14.760323 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 14:32:14.766425 systemd-logind[1531]: Removed session 15. 
Mar 2 14:32:18.104865 containerd[1557]: time="2026-03-02T14:32:18.104801048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:18.111982 containerd[1557]: time="2026-03-02T14:32:18.110391315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=48403149" Mar 2 14:32:18.120960 containerd[1557]: time="2026-03-02T14:32:18.119534684Z" level=info msg="ImageCreate event name:\"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:18.146280 containerd[1557]: time="2026-03-02T14:32:18.144214965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:18.146678 containerd[1557]: time="2026-03-02T14:32:18.146486341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 12.80277091s" Mar 2 14:32:18.146678 containerd[1557]: time="2026-03-02T14:32:18.146528970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 14:32:18.171941 containerd[1557]: time="2026-03-02T14:32:18.169950052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 14:32:18.274090 containerd[1557]: time="2026-03-02T14:32:18.271797781Z" level=info msg="CreateContainer within sandbox 
\"9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 14:32:18.406828 containerd[1557]: time="2026-03-02T14:32:18.406526329Z" level=info msg="Container 829b729276a5ddcd583fe3dbfc8d41b83f5c70ac042ee0ef83ccaac7250a753c: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:32:18.541731 containerd[1557]: time="2026-03-02T14:32:18.534851522Z" level=info msg="CreateContainer within sandbox \"9f89b7a794a6650bf8453fddc20564dabcbb3aff7e1d50462ec98de5b5cab205\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"829b729276a5ddcd583fe3dbfc8d41b83f5c70ac042ee0ef83ccaac7250a753c\"" Mar 2 14:32:18.543908 containerd[1557]: time="2026-03-02T14:32:18.542622889Z" level=info msg="StartContainer for \"829b729276a5ddcd583fe3dbfc8d41b83f5c70ac042ee0ef83ccaac7250a753c\"" Mar 2 14:32:18.606869 containerd[1557]: time="2026-03-02T14:32:18.603501218Z" level=info msg="connecting to shim 829b729276a5ddcd583fe3dbfc8d41b83f5c70ac042ee0ef83ccaac7250a753c" address="unix:///run/containerd/s/8bc99de587f0319ea4488454009bc4dc3d89f905bc84adf8b9bceb30363701d9" protocol=ttrpc version=3 Mar 2 14:32:18.909853 systemd[1]: Started cri-containerd-829b729276a5ddcd583fe3dbfc8d41b83f5c70ac042ee0ef83ccaac7250a753c.scope - libcontainer container 829b729276a5ddcd583fe3dbfc8d41b83f5c70ac042ee0ef83ccaac7250a753c. Mar 2 14:32:19.452701 kubelet[2864]: E0302 14:32:19.452195 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:19.558174 containerd[1557]: time="2026-03-02T14:32:19.557857820Z" level=info msg="StartContainer for \"829b729276a5ddcd583fe3dbfc8d41b83f5c70ac042ee0ef83ccaac7250a753c\" returns successfully" Mar 2 14:32:19.778373 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:60884.service - OpenSSH per-connection server daemon (10.0.0.1:60884). 
Mar 2 14:32:20.325201 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 60884 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:20.366359 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:20.440791 systemd-logind[1531]: New session 16 of user core. Mar 2 14:32:20.453893 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 14:32:20.767437 kubelet[2864]: I0302 14:32:20.767178 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5889764684-7jfq2" podStartSLOduration=140.996875817 podStartE2EDuration="2m54.767158696s" podCreationTimestamp="2026-03-02 14:29:26 +0000 UTC" firstStartedPulling="2026-03-02 14:31:44.389623377 +0000 UTC m=+231.366778504" lastFinishedPulling="2026-03-02 14:32:18.159906256 +0000 UTC m=+265.137061383" observedRunningTime="2026-03-02 14:32:20.766559478 +0000 UTC m=+267.743714614" watchObservedRunningTime="2026-03-02 14:32:20.767158696 +0000 UTC m=+267.744313833" Mar 2 14:32:21.483133 sshd[5280]: Connection closed by 10.0.0.1 port 60884 Mar 2 14:32:21.484660 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:21.538268 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:60884.service: Deactivated successfully. Mar 2 14:32:21.564687 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 14:32:21.579489 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Mar 2 14:32:21.603831 systemd-logind[1531]: Removed session 16. Mar 2 14:32:26.585751 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:44034.service - OpenSSH per-connection server daemon (10.0.0.1:44034). 
Mar 2 14:32:27.427580 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 44034 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:27.453987 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:27.504728 systemd-logind[1531]: New session 17 of user core. Mar 2 14:32:27.542245 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 14:32:27.861210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818447287.mount: Deactivated successfully. Mar 2 14:32:29.272821 sshd[5328]: Connection closed by 10.0.0.1 port 44034 Mar 2 14:32:29.279799 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:29.309618 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:44034.service: Deactivated successfully. Mar 2 14:32:29.329693 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 14:32:29.354007 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Mar 2 14:32:29.364414 systemd-logind[1531]: Removed session 17. 
Mar 2 14:32:33.408745 containerd[1557]: time="2026-03-02T14:32:33.358841213Z" level=warning msg="container event discarded" container=dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01 type=CONTAINER_CREATED_EVENT Mar 2 14:32:33.436973 containerd[1557]: time="2026-03-02T14:32:33.406817874Z" level=warning msg="container event discarded" container=dcbf2a9d0d5b8370b6bfa3731d92612346009134dc1dd7a0038fd5f690ea3f01 type=CONTAINER_STARTED_EVENT Mar 2 14:32:33.459285 containerd[1557]: time="2026-03-02T14:32:33.454536173Z" level=warning msg="container event discarded" container=69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7 type=CONTAINER_CREATED_EVENT Mar 2 14:32:33.459285 containerd[1557]: time="2026-03-02T14:32:33.454588000Z" level=warning msg="container event discarded" container=69d735b028232b28ef955f790a40675e0955a15045aebd9bb040b2a8c19808b7 type=CONTAINER_STARTED_EVENT Mar 2 14:32:33.483220 containerd[1557]: time="2026-03-02T14:32:33.477551647Z" level=warning msg="container event discarded" container=0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00 type=CONTAINER_CREATED_EVENT Mar 2 14:32:33.483220 containerd[1557]: time="2026-03-02T14:32:33.477601259Z" level=warning msg="container event discarded" container=0c3a28bfcf043aa5eb279ab8cbc218691858c9d0bc8003f21a9f8e235241ae00 type=CONTAINER_STARTED_EVENT Mar 2 14:32:33.532262 containerd[1557]: time="2026-03-02T14:32:33.530250089Z" level=warning msg="container event discarded" container=baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5 type=CONTAINER_CREATED_EVENT Mar 2 14:32:33.595638 containerd[1557]: time="2026-03-02T14:32:33.595385088Z" level=warning msg="container event discarded" container=23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504 type=CONTAINER_CREATED_EVENT Mar 2 14:32:33.694381 containerd[1557]: time="2026-03-02T14:32:33.691906289Z" level=warning msg="container event discarded" 
container=44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea type=CONTAINER_CREATED_EVENT Mar 2 14:32:34.021444 containerd[1557]: time="2026-03-02T14:32:34.019977140Z" level=warning msg="container event discarded" container=23e6d9004854bbfa831875c970d768d162a36a0589b7c493726a1bc23a8d8504 type=CONTAINER_STARTED_EVENT Mar 2 14:32:34.076223 containerd[1557]: time="2026-03-02T14:32:34.075340207Z" level=warning msg="container event discarded" container=baf45ec90b2b232c8cae0808b8b5a99986ffb0cca9e83ccbf5cf182486726cc5 type=CONTAINER_STARTED_EVENT Mar 2 14:32:34.120393 containerd[1557]: time="2026-03-02T14:32:34.117437481Z" level=warning msg="container event discarded" container=44ec253baaf4d3170637fd4d3ea8794bbbfe648529987f8661dc1ce4ed5ae9ea type=CONTAINER_STARTED_EVENT Mar 2 14:32:34.329328 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:41984.service - OpenSSH per-connection server daemon (10.0.0.1:41984). Mar 2 14:32:34.662861 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 41984 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:34.677771 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:34.732960 systemd-logind[1531]: New session 18 of user core. Mar 2 14:32:34.759753 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 2 14:32:35.686139 sshd[5377]: Connection closed by 10.0.0.1 port 41984 Mar 2 14:32:35.684777 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:35.706242 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:41984.service: Deactivated successfully. Mar 2 14:32:35.716637 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 14:32:35.726970 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. Mar 2 14:32:35.732242 systemd-logind[1531]: Removed session 18. 
Mar 2 14:32:36.149554 containerd[1557]: time="2026-03-02T14:32:36.148702721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:36.157514 containerd[1557]: time="2026-03-02T14:32:36.155830668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=55607954" Mar 2 14:32:36.163932 containerd[1557]: time="2026-03-02T14:32:36.163862722Z" level=info msg="ImageCreate event name:\"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:36.189564 containerd[1557]: time="2026-03-02T14:32:36.184938328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"55607800\" in 18.014937461s" Mar 2 14:32:36.189564 containerd[1557]: time="2026-03-02T14:32:36.184988192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\"" Mar 2 14:32:36.189564 containerd[1557]: time="2026-03-02T14:32:36.187860017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:36.202867 containerd[1557]: time="2026-03-02T14:32:36.202826045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 14:32:36.226621 containerd[1557]: time="2026-03-02T14:32:36.226440923Z" level=info msg="CreateContainer within sandbox \"2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 14:32:36.322419 containerd[1557]: time="2026-03-02T14:32:36.320452278Z" level=info msg="Container 7ab0a967fb601d84dd4434fdc41eb84d0910e9cf1abb3b9fdc5ad64597e3cdea: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:32:36.473531 containerd[1557]: time="2026-03-02T14:32:36.470277613Z" level=info msg="CreateContainer within sandbox \"2f4576e682ff77e30b9d3c0da46f1c6f18ae6d1b782700b01bef7ccf922e5ffa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7ab0a967fb601d84dd4434fdc41eb84d0910e9cf1abb3b9fdc5ad64597e3cdea\"" Mar 2 14:32:36.473742 containerd[1557]: time="2026-03-02T14:32:36.473528236Z" level=info msg="StartContainer for \"7ab0a967fb601d84dd4434fdc41eb84d0910e9cf1abb3b9fdc5ad64597e3cdea\"" Mar 2 14:32:36.484824 containerd[1557]: time="2026-03-02T14:32:36.475760922Z" level=info msg="connecting to shim 7ab0a967fb601d84dd4434fdc41eb84d0910e9cf1abb3b9fdc5ad64597e3cdea" address="unix:///run/containerd/s/11297980e46c84b0a25e8b6bdf270f9e0f8c0ac753ad0575f6bc8c02c15ee01f" protocol=ttrpc version=3 Mar 2 14:32:36.692942 systemd[1]: Started cri-containerd-7ab0a967fb601d84dd4434fdc41eb84d0910e9cf1abb3b9fdc5ad64597e3cdea.scope - libcontainer container 7ab0a967fb601d84dd4434fdc41eb84d0910e9cf1abb3b9fdc5ad64597e3cdea. 
Mar 2 14:32:36.700208 containerd[1557]: time="2026-03-02T14:32:36.699634629Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:36.701833 containerd[1557]: time="2026-03-02T14:32:36.701804836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 14:32:36.722607 containerd[1557]: time="2026-03-02T14:32:36.722559825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 516.89319ms" Mar 2 14:32:36.722990 containerd[1557]: time="2026-03-02T14:32:36.722962276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 14:32:36.737463 containerd[1557]: time="2026-03-02T14:32:36.734953008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 14:32:36.754877 containerd[1557]: time="2026-03-02T14:32:36.754656021Z" level=info msg="CreateContainer within sandbox \"cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 14:32:36.819687 containerd[1557]: time="2026-03-02T14:32:36.819490460Z" level=info msg="Container ce4e8194c3ebce45ee52cb57765b465dfd93affd981ca7e624e7e0be182a024f: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:32:36.853898 containerd[1557]: time="2026-03-02T14:32:36.849371028Z" level=info msg="CreateContainer within sandbox \"cfafa27c62fd42bd9206f045519c9e7808a95aa415d24576249694063d05fb0c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"ce4e8194c3ebce45ee52cb57765b465dfd93affd981ca7e624e7e0be182a024f\"" Mar 2 14:32:36.853898 containerd[1557]: time="2026-03-02T14:32:36.850724146Z" level=info msg="StartContainer for \"ce4e8194c3ebce45ee52cb57765b465dfd93affd981ca7e624e7e0be182a024f\"" Mar 2 14:32:36.853898 containerd[1557]: time="2026-03-02T14:32:36.853690300Z" level=info msg="connecting to shim ce4e8194c3ebce45ee52cb57765b465dfd93affd981ca7e624e7e0be182a024f" address="unix:///run/containerd/s/527e7dcc378396e2120a340566f7d46a2ba35bad839110edfc0a062ef8b8eab0" protocol=ttrpc version=3 Mar 2 14:32:36.974471 systemd[1]: Started cri-containerd-ce4e8194c3ebce45ee52cb57765b465dfd93affd981ca7e624e7e0be182a024f.scope - libcontainer container ce4e8194c3ebce45ee52cb57765b465dfd93affd981ca7e624e7e0be182a024f. Mar 2 14:32:37.126308 containerd[1557]: time="2026-03-02T14:32:37.125665343Z" level=info msg="StartContainer for \"7ab0a967fb601d84dd4434fdc41eb84d0910e9cf1abb3b9fdc5ad64597e3cdea\" returns successfully" Mar 2 14:32:37.313848 containerd[1557]: time="2026-03-02T14:32:37.313798951Z" level=info msg="StartContainer for \"ce4e8194c3ebce45ee52cb57765b465dfd93affd981ca7e624e7e0be182a024f\" returns successfully" Mar 2 14:32:38.287597 kubelet[2864]: I0302 14:32:38.287482 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7d7658d587-nd82q" podStartSLOduration=140.930180481 podStartE2EDuration="3m11.287461151s" podCreationTimestamp="2026-03-02 14:29:27 +0000 UTC" firstStartedPulling="2026-03-02 14:31:45.84015394 +0000 UTC m=+232.817309057" lastFinishedPulling="2026-03-02 14:32:36.1974346 +0000 UTC m=+283.174589727" observedRunningTime="2026-03-02 14:32:37.266910261 +0000 UTC m=+284.244065398" watchObservedRunningTime="2026-03-02 14:32:38.287461151 +0000 UTC m=+285.264616278" Mar 2 14:32:39.448660 kubelet[2864]: E0302 14:32:39.448558 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 14:32:39.489978 containerd[1557]: time="2026-03-02T14:32:39.488784791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:39.494762 containerd[1557]: time="2026-03-02T14:32:39.494724632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=6036825" Mar 2 14:32:39.503263 containerd[1557]: time="2026-03-02T14:32:39.502922283Z" level=info msg="ImageCreate event name:\"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:39.513480 containerd[1557]: time="2026-03-02T14:32:39.513366366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 14:32:39.550434 containerd[1557]: time="2026-03-02T14:32:39.550286201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7592862\" in 2.811798563s" Mar 2 14:32:39.550434 containerd[1557]: time="2026-03-02T14:32:39.550396197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\"" Mar 2 14:32:39.594735 containerd[1557]: time="2026-03-02T14:32:39.591559448Z" level=info msg="CreateContainer within sandbox \"ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 14:32:39.644214 containerd[1557]: 
time="2026-03-02T14:32:39.643878586Z" level=info msg="Container 2aef88b9dc890f78cd886f74ef4f5cd45a575604f42afd1ed27398be81b13d2e: CDI devices from CRI Config.CDIDevices: []" Mar 2 14:32:39.659269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935084692.mount: Deactivated successfully. Mar 2 14:32:39.693534 containerd[1557]: time="2026-03-02T14:32:39.693248490Z" level=info msg="CreateContainer within sandbox \"ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2aef88b9dc890f78cd886f74ef4f5cd45a575604f42afd1ed27398be81b13d2e\"" Mar 2 14:32:39.696159 containerd[1557]: time="2026-03-02T14:32:39.695854562Z" level=info msg="StartContainer for \"2aef88b9dc890f78cd886f74ef4f5cd45a575604f42afd1ed27398be81b13d2e\"" Mar 2 14:32:39.702868 containerd[1557]: time="2026-03-02T14:32:39.699817706Z" level=info msg="connecting to shim 2aef88b9dc890f78cd886f74ef4f5cd45a575604f42afd1ed27398be81b13d2e" address="unix:///run/containerd/s/3e20a9036da90a4259a389a4cb112773f5eae7c7ee7ce35f6cfc8aafee2e23e4" protocol=ttrpc version=3 Mar 2 14:32:39.798390 systemd[1]: Started cri-containerd-2aef88b9dc890f78cd886f74ef4f5cd45a575604f42afd1ed27398be81b13d2e.scope - libcontainer container 2aef88b9dc890f78cd886f74ef4f5cd45a575604f42afd1ed27398be81b13d2e. Mar 2 14:32:40.025591 containerd[1557]: time="2026-03-02T14:32:40.024890515Z" level=info msg="StartContainer for \"2aef88b9dc890f78cd886f74ef4f5cd45a575604f42afd1ed27398be81b13d2e\" returns successfully" Mar 2 14:32:40.106692 containerd[1557]: time="2026-03-02T14:32:40.106612457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 14:32:40.760943 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:38362.service - OpenSSH per-connection server daemon (10.0.0.1:38362). 
Mar 2 14:32:41.176794 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 38362 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0 Mar 2 14:32:41.190651 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 14:32:41.238829 systemd-logind[1531]: New session 19 of user core. Mar 2 14:32:41.266856 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 14:32:42.184694 sshd[5645]: Connection closed by 10.0.0.1 port 38362 Mar 2 14:32:42.186495 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Mar 2 14:32:42.202670 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:38362.service: Deactivated successfully. Mar 2 14:32:42.209558 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 14:32:42.213598 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Mar 2 14:32:42.217910 systemd-logind[1531]: Removed session 19. Mar 2 14:32:43.063844 kubelet[2864]: I0302 14:32:43.063775 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5889764684-6prtp" podStartSLOduration=146.60444944 podStartE2EDuration="3m17.06375157s" podCreationTimestamp="2026-03-02 14:29:26 +0000 UTC" firstStartedPulling="2026-03-02 14:31:46.270362192 +0000 UTC m=+233.247517309" lastFinishedPulling="2026-03-02 14:32:36.729664312 +0000 UTC m=+283.706819439" observedRunningTime="2026-03-02 14:32:38.288754303 +0000 UTC m=+285.265909450" watchObservedRunningTime="2026-03-02 14:32:43.06375157 +0000 UTC m=+290.040906687" Mar 2 14:32:43.792910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054858974.mount: Deactivated successfully. 
Mar 2 14:32:43.910547 containerd[1557]: time="2026-03-02T14:32:43.909831132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:32:43.924294 containerd[1557]: time="2026-03-02T14:32:43.923810631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=17599119"
Mar 2 14:32:43.929318 containerd[1557]: time="2026-03-02T14:32:43.928789960Z" level=info msg="ImageCreate event name:\"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:32:43.942261 containerd[1557]: time="2026-03-02T14:32:43.939352184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 14:32:43.942261 containerd[1557]: time="2026-03-02T14:32:43.941351434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"17598949\" in 3.834685618s"
Mar 2 14:32:43.942261 containerd[1557]: time="2026-03-02T14:32:43.941573838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\""
Mar 2 14:32:43.979948 containerd[1557]: time="2026-03-02T14:32:43.979774402Z" level=info msg="CreateContainer within sandbox \"ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 2 14:32:44.092821 containerd[1557]: time="2026-03-02T14:32:44.092591713Z" level=info msg="Container 5ac1f8cfba433191cdc33f7d4788125ad3df5537c6473e76c7af87b61920615d: CDI devices from CRI Config.CDIDevices: []"
Mar 2 14:32:44.132885 containerd[1557]: time="2026-03-02T14:32:44.132764958Z" level=info msg="CreateContainer within sandbox \"ebb773b758dd1c7198fb2aa781cccec9954c4df2c1896f41a7f900ae337f10b8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5ac1f8cfba433191cdc33f7d4788125ad3df5537c6473e76c7af87b61920615d\""
Mar 2 14:32:44.135794 containerd[1557]: time="2026-03-02T14:32:44.135762618Z" level=info msg="StartContainer for \"5ac1f8cfba433191cdc33f7d4788125ad3df5537c6473e76c7af87b61920615d\""
Mar 2 14:32:44.142578 containerd[1557]: time="2026-03-02T14:32:44.142539883Z" level=info msg="connecting to shim 5ac1f8cfba433191cdc33f7d4788125ad3df5537c6473e76c7af87b61920615d" address="unix:///run/containerd/s/3e20a9036da90a4259a389a4cb112773f5eae7c7ee7ce35f6cfc8aafee2e23e4" protocol=ttrpc version=3
Mar 2 14:32:44.247733 systemd[1]: Started cri-containerd-5ac1f8cfba433191cdc33f7d4788125ad3df5537c6473e76c7af87b61920615d.scope - libcontainer container 5ac1f8cfba433191cdc33f7d4788125ad3df5537c6473e76c7af87b61920615d.
Mar 2 14:32:44.532369 containerd[1557]: time="2026-03-02T14:32:44.531252364Z" level=info msg="StartContainer for \"5ac1f8cfba433191cdc33f7d4788125ad3df5537c6473e76c7af87b61920615d\" returns successfully"
Mar 2 14:32:45.400261 kubelet[2864]: I0302 14:32:45.398690 2864 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-79f4755f69-hg7tg" podStartSLOduration=19.014876574 podStartE2EDuration="1m16.398672537s" podCreationTimestamp="2026-03-02 14:31:29 +0000 UTC" firstStartedPulling="2026-03-02 14:31:46.566495045 +0000 UTC m=+233.543650172" lastFinishedPulling="2026-03-02 14:32:43.950291018 +0000 UTC m=+290.927446135" observedRunningTime="2026-03-02 14:32:45.391458003 +0000 UTC m=+292.368613140" watchObservedRunningTime="2026-03-02 14:32:45.398672537 +0000 UTC m=+292.375827664"
Mar 2 14:32:47.265574 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:38372.service - OpenSSH per-connection server daemon (10.0.0.1:38372).
Mar 2 14:32:47.733746 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 38372 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:32:47.745882 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:32:47.763931 systemd-logind[1531]: New session 20 of user core.
Mar 2 14:32:47.783686 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 14:32:48.649168 sshd[5714]: Connection closed by 10.0.0.1 port 38372
Mar 2 14:32:48.650769 sshd-session[5711]: pam_unix(sshd:session): session closed for user core
Mar 2 14:32:48.672398 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:38372.service: Deactivated successfully.
Mar 2 14:32:48.685889 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 14:32:48.692299 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit.
Mar 2 14:32:48.700521 systemd-logind[1531]: Removed session 20.
Mar 2 14:32:50.488386 kubelet[2864]: E0302 14:32:50.487756 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:32:51.455582 kubelet[2864]: E0302 14:32:51.451806 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:32:53.716752 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:38846.service - OpenSSH per-connection server daemon (10.0.0.1:38846).
Mar 2 14:32:54.017819 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 38846 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:32:54.037704 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:32:54.103615 systemd-logind[1531]: New session 21 of user core.
Mar 2 14:32:54.126379 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 14:32:54.859567 sshd[5736]: Connection closed by 10.0.0.1 port 38846
Mar 2 14:32:54.860443 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Mar 2 14:32:54.889663 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:38846.service: Deactivated successfully.
Mar 2 14:32:54.916847 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 14:32:54.932731 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit.
Mar 2 14:32:54.940954 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:38862.service - OpenSSH per-connection server daemon (10.0.0.1:38862).
Mar 2 14:32:54.953668 systemd-logind[1531]: Removed session 21.
Mar 2 14:32:55.264816 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 38862 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:32:55.273699 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:32:55.339957 systemd-logind[1531]: New session 22 of user core.
Mar 2 14:32:55.358697 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 14:32:56.428545 sshd[5757]: Connection closed by 10.0.0.1 port 38862
Mar 2 14:32:56.430469 sshd-session[5754]: pam_unix(sshd:session): session closed for user core
Mar 2 14:32:56.466591 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:38862.service: Deactivated successfully.
Mar 2 14:32:56.471545 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 14:32:56.476346 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit.
Mar 2 14:32:56.492811 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:38872.service - OpenSSH per-connection server daemon (10.0.0.1:38872).
Mar 2 14:32:56.533557 systemd-logind[1531]: Removed session 22.
Mar 2 14:32:56.864808 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 38872 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:32:56.869594 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:32:56.940307 systemd-logind[1531]: New session 23 of user core.
Mar 2 14:32:56.971371 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 14:32:57.868864 sshd[5785]: Connection closed by 10.0.0.1 port 38872
Mar 2 14:32:57.870484 sshd-session[5778]: pam_unix(sshd:session): session closed for user core
Mar 2 14:32:57.935659 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:38872.service: Deactivated successfully.
Mar 2 14:32:57.936591 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit.
Mar 2 14:32:57.950324 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 14:32:57.971286 systemd-logind[1531]: Removed session 23.
Mar 2 14:33:02.926836 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:40628.service - OpenSSH per-connection server daemon (10.0.0.1:40628).
Mar 2 14:33:03.397547 sshd[5827]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:03.406354 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:03.432927 systemd-logind[1531]: New session 24 of user core.
Mar 2 14:33:03.444238 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 14:33:04.076172 sshd[5838]: Connection closed by 10.0.0.1 port 40628
Mar 2 14:33:04.077565 sshd-session[5827]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:04.110394 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:40628.service: Deactivated successfully.
Mar 2 14:33:04.123738 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 14:33:04.130295 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit.
Mar 2 14:33:04.142721 systemd-logind[1531]: Removed session 24.
Mar 2 14:33:06.309758 containerd[1557]: time="2026-03-02T14:33:06.309543614Z" level=warning msg="container event discarded" container=3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c type=CONTAINER_CREATED_EVENT
Mar 2 14:33:06.309758 containerd[1557]: time="2026-03-02T14:33:06.309601092Z" level=warning msg="container event discarded" container=3c1425edf5341fac0b55b6ca889be4d90103d158549ffd3a35b3783be884249c type=CONTAINER_STARTED_EVENT
Mar 2 14:33:06.721305 containerd[1557]: time="2026-03-02T14:33:06.720861336Z" level=warning msg="container event discarded" container=0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750 type=CONTAINER_CREATED_EVENT
Mar 2 14:33:07.447708 kubelet[2864]: E0302 14:33:07.447607 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:33:08.190348 containerd[1557]: time="2026-03-02T14:33:08.187903805Z" level=warning msg="container event discarded" container=0fa37db63c62edfdad0d80fef7625b1eb58c539388a15a44239e8757f3dd9750 type=CONTAINER_STARTED_EVENT
Mar 2 14:33:09.119716 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:40630.service - OpenSSH per-connection server daemon (10.0.0.1:40630).
Mar 2 14:33:09.145505 containerd[1557]: time="2026-03-02T14:33:09.145439357Z" level=warning msg="container event discarded" container=085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b type=CONTAINER_CREATED_EVENT
Mar 2 14:33:09.145838 containerd[1557]: time="2026-03-02T14:33:09.145800921Z" level=warning msg="container event discarded" container=085b32ea563e463c36b03a3ff6f3ac383c03aea3ec66ac3401a2863600072d4b type=CONTAINER_STARTED_EVENT
Mar 2 14:33:09.347581 sshd[5914]: Accepted publickey for core from 10.0.0.1 port 40630 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:09.357758 sshd-session[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:09.400649 systemd-logind[1531]: New session 25 of user core.
Mar 2 14:33:09.419948 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 14:33:10.136828 sshd[5921]: Connection closed by 10.0.0.1 port 40630
Mar 2 14:33:10.143803 sshd-session[5914]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:10.166883 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:40630.service: Deactivated successfully.
Mar 2 14:33:10.178596 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 14:33:10.190973 systemd-logind[1531]: Session 25 logged out. Waiting for processes to exit.
Mar 2 14:33:10.209859 systemd-logind[1531]: Removed session 25.
Mar 2 14:33:13.451071 kubelet[2864]: E0302 14:33:13.450725 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:33:15.191641 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:35074.service - OpenSSH per-connection server daemon (10.0.0.1:35074).
Mar 2 14:33:15.425332 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:15.446704 sshd-session[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:15.485816 systemd-logind[1531]: New session 26 of user core.
Mar 2 14:33:15.500456 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 14:33:15.955573 sshd[5965]: Connection closed by 10.0.0.1 port 35074
Mar 2 14:33:15.956933 sshd-session[5948]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:15.984330 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:35074.service: Deactivated successfully.
Mar 2 14:33:15.998425 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 14:33:16.001393 systemd-logind[1531]: Session 26 logged out. Waiting for processes to exit.
Mar 2 14:33:16.012891 systemd[1]: Started sshd@26-10.0.0.7:22-10.0.0.1:35090.service - OpenSSH per-connection server daemon (10.0.0.1:35090).
Mar 2 14:33:16.019498 systemd-logind[1531]: Removed session 26.
Mar 2 14:33:16.181631 sshd[5981]: Accepted publickey for core from 10.0.0.1 port 35090 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:16.189505 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:16.225783 systemd-logind[1531]: New session 27 of user core.
Mar 2 14:33:16.247446 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 14:33:17.497702 kubelet[2864]: E0302 14:33:17.497576 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:33:17.663484 sshd[5984]: Connection closed by 10.0.0.1 port 35090
Mar 2 14:33:17.659714 sshd-session[5981]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:17.734712 systemd[1]: sshd@26-10.0.0.7:22-10.0.0.1:35090.service: Deactivated successfully.
Mar 2 14:33:17.747536 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 14:33:17.757211 systemd-logind[1531]: Session 27 logged out. Waiting for processes to exit.
Mar 2 14:33:17.767521 systemd[1]: Started sshd@27-10.0.0.7:22-10.0.0.1:35100.service - OpenSSH per-connection server daemon (10.0.0.1:35100).
Mar 2 14:33:17.791219 systemd-logind[1531]: Removed session 27.
Mar 2 14:33:18.094871 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 35100 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:18.103493 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:18.142689 systemd-logind[1531]: New session 28 of user core.
Mar 2 14:33:18.162266 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 2 14:33:20.697314 sshd[5999]: Connection closed by 10.0.0.1 port 35100
Mar 2 14:33:20.695717 sshd-session[5996]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:20.721716 systemd[1]: sshd@27-10.0.0.7:22-10.0.0.1:35100.service: Deactivated successfully.
Mar 2 14:33:20.729572 systemd[1]: session-28.scope: Deactivated successfully.
Mar 2 14:33:20.738858 systemd-logind[1531]: Session 28 logged out. Waiting for processes to exit.
Mar 2 14:33:20.752521 systemd[1]: Started sshd@28-10.0.0.7:22-10.0.0.1:52502.service - OpenSSH per-connection server daemon (10.0.0.1:52502).
Mar 2 14:33:20.758769 systemd-logind[1531]: Removed session 28.
Mar 2 14:33:20.971598 sshd[6027]: Accepted publickey for core from 10.0.0.1 port 52502 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:20.979939 sshd-session[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:21.030510 systemd-logind[1531]: New session 29 of user core.
Mar 2 14:33:21.044809 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 2 14:33:22.640543 sshd[6030]: Connection closed by 10.0.0.1 port 52502
Mar 2 14:33:22.646217 sshd-session[6027]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:22.696818 systemd[1]: Started sshd@29-10.0.0.7:22-10.0.0.1:52518.service - OpenSSH per-connection server daemon (10.0.0.1:52518).
Mar 2 14:33:22.717675 systemd[1]: sshd@28-10.0.0.7:22-10.0.0.1:52502.service: Deactivated successfully.
Mar 2 14:33:22.733712 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 14:33:22.756827 systemd-logind[1531]: Session 29 logged out. Waiting for processes to exit.
Mar 2 14:33:22.788799 systemd-logind[1531]: Removed session 29.
Mar 2 14:33:22.960932 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 52518 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:22.972826 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:23.024687 systemd-logind[1531]: New session 30 of user core.
Mar 2 14:33:23.042658 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 2 14:33:23.617464 sshd[6047]: Connection closed by 10.0.0.1 port 52518
Mar 2 14:33:23.619889 sshd-session[6041]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:23.645481 systemd[1]: sshd@29-10.0.0.7:22-10.0.0.1:52518.service: Deactivated successfully.
Mar 2 14:33:23.660358 systemd[1]: session-30.scope: Deactivated successfully.
Mar 2 14:33:23.672832 systemd-logind[1531]: Session 30 logged out. Waiting for processes to exit.
Mar 2 14:33:23.689277 systemd-logind[1531]: Removed session 30.
Mar 2 14:33:28.659413 systemd[1]: Started sshd@30-10.0.0.7:22-10.0.0.1:52530.service - OpenSSH per-connection server daemon (10.0.0.1:52530).
Mar 2 14:33:28.935365 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 52530 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:28.938947 sshd-session[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:28.957393 systemd-logind[1531]: New session 31 of user core.
Mar 2 14:33:28.967119 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 2 14:33:29.315289 sshd[6063]: Connection closed by 10.0.0.1 port 52530
Mar 2 14:33:29.318528 sshd-session[6060]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:29.327491 systemd[1]: sshd@30-10.0.0.7:22-10.0.0.1:52530.service: Deactivated successfully.
Mar 2 14:33:29.335326 systemd[1]: session-31.scope: Deactivated successfully.
Mar 2 14:33:29.353555 systemd-logind[1531]: Session 31 logged out. Waiting for processes to exit.
Mar 2 14:33:29.363665 systemd-logind[1531]: Removed session 31.
Mar 2 14:33:34.369768 systemd[1]: Started sshd@31-10.0.0.7:22-10.0.0.1:42852.service - OpenSSH per-connection server daemon (10.0.0.1:42852).
Mar 2 14:33:34.618429 sshd[6101]: Accepted publickey for core from 10.0.0.1 port 42852 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:34.621605 sshd-session[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:34.651868 systemd-logind[1531]: New session 32 of user core.
Mar 2 14:33:34.671421 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 2 14:33:35.034342 sshd[6104]: Connection closed by 10.0.0.1 port 42852
Mar 2 14:33:35.038610 sshd-session[6101]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:35.054935 systemd[1]: sshd@31-10.0.0.7:22-10.0.0.1:42852.service: Deactivated successfully.
Mar 2 14:33:35.065836 systemd[1]: session-32.scope: Deactivated successfully.
Mar 2 14:33:35.078744 systemd-logind[1531]: Session 32 logged out. Waiting for processes to exit.
Mar 2 14:33:35.089459 systemd-logind[1531]: Removed session 32.
Mar 2 14:33:40.060813 systemd[1]: Started sshd@32-10.0.0.7:22-10.0.0.1:39218.service - OpenSSH per-connection server daemon (10.0.0.1:39218).
Mar 2 14:33:40.191827 sshd[6216]: Accepted publickey for core from 10.0.0.1 port 39218 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:40.198831 sshd-session[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:40.224549 systemd-logind[1531]: New session 33 of user core.
Mar 2 14:33:40.240801 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 2 14:33:40.519368 sshd[6219]: Connection closed by 10.0.0.1 port 39218
Mar 2 14:33:40.519765 sshd-session[6216]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:40.529605 systemd[1]: sshd@32-10.0.0.7:22-10.0.0.1:39218.service: Deactivated successfully.
Mar 2 14:33:40.533707 systemd[1]: session-33.scope: Deactivated successfully.
Mar 2 14:33:40.537539 systemd-logind[1531]: Session 33 logged out. Waiting for processes to exit.
Mar 2 14:33:40.540651 systemd-logind[1531]: Removed session 33.
Mar 2 14:33:42.465549 kubelet[2864]: E0302 14:33:42.465508 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 14:33:42.905224 containerd[1557]: time="2026-03-02T14:33:42.904590666Z" level=warning msg="container event discarded" container=a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1 type=CONTAINER_CREATED_EVENT
Mar 2 14:33:43.858932 containerd[1557]: time="2026-03-02T14:33:43.858666817Z" level=warning msg="container event discarded" container=a0fc2971e440ad6abf0e89eb719995f9481b094706f5514839645e39b0b291c1 type=CONTAINER_STARTED_EVENT
Mar 2 14:33:45.547246 systemd[1]: Started sshd@33-10.0.0.7:22-10.0.0.1:39222.service - OpenSSH per-connection server daemon (10.0.0.1:39222).
Mar 2 14:33:45.674780 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:45.677883 sshd-session[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:45.693737 systemd-logind[1531]: New session 34 of user core.
Mar 2 14:33:45.699556 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 2 14:33:46.059762 sshd[6238]: Connection closed by 10.0.0.1 port 39222
Mar 2 14:33:46.060505 sshd-session[6235]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:46.071562 systemd[1]: sshd@33-10.0.0.7:22-10.0.0.1:39222.service: Deactivated successfully.
Mar 2 14:33:46.074569 systemd[1]: session-34.scope: Deactivated successfully.
Mar 2 14:33:46.084854 systemd-logind[1531]: Session 34 logged out. Waiting for processes to exit.
Mar 2 14:33:46.089245 systemd-logind[1531]: Removed session 34.
Mar 2 14:33:51.089753 systemd[1]: Started sshd@34-10.0.0.7:22-10.0.0.1:39002.service - OpenSSH per-connection server daemon (10.0.0.1:39002).
Mar 2 14:33:51.206538 sshd[6252]: Accepted publickey for core from 10.0.0.1 port 39002 ssh2: RSA SHA256:YvdBDTdEI1lli8iGgRc26R2mJamvNBJNeePgmjt42C0
Mar 2 14:33:51.212458 sshd-session[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 14:33:51.235661 systemd-logind[1531]: New session 35 of user core.
Mar 2 14:33:51.247443 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 2 14:33:51.476966 sshd[6255]: Connection closed by 10.0.0.1 port 39002
Mar 2 14:33:51.477462 sshd-session[6252]: pam_unix(sshd:session): session closed for user core
Mar 2 14:33:51.487732 systemd[1]: sshd@34-10.0.0.7:22-10.0.0.1:39002.service: Deactivated successfully.
Mar 2 14:33:51.491996 systemd[1]: session-35.scope: Deactivated successfully.
Mar 2 14:33:51.496722 systemd-logind[1531]: Session 35 logged out. Waiting for processes to exit.
Mar 2 14:33:51.499842 systemd-logind[1531]: Removed session 35.
Mar 2 14:33:52.449907 kubelet[2864]: E0302 14:33:52.449350 2864 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"